Pengguna:Kekavigi/bak pasir: Difference between revisions

{{short description|Inverse image of zero under a homomorphism}}
{{Short description|Determinant of a subsection of a square matrix}}{{About|the concept in linear algebra|the concept of a ''minor'' in graph theory|Graf minor}}


In [[aljabar|algebra]], the '''kernel''' of a [[homomorfisme|homomorphism]] (a function that preserves the [[Struktur aljabar|structure]]) is generally the [[gambar invers|inverse image]] of 0 (except for [[Grup (matematika)|groups]] whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the [[Kernel (aljabar linear)|kernel of a linear map]]. The [[Kernel (matriks)|kernel of a matrix]], also called the ''null space'', is the kernel of the linear map defined by the matrix.
In [[aljabar linear|linear algebra]], a '''minor''' of a matrix <math>\mathbf{A}</math> is the [[determinan|determinant]] of some smaller [[matriks persegi|square matrix]], formed by deleting one or more rows and columns of <math>\mathbf{A}</math>. Minors obtained by deleting just one row and one column of a square matrix (called '''first minors''') are needed to compute the matrix of '''cofactors''', which in turn is useful for computing both the determinant and the [[Matriks yang dapat dibalik|inverse]] of a square matrix.


The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is [[Fungsi injeksi|injective]], that is, if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.<ref>See {{harvnb|Dummit|Foote|2004}} and {{harvnb|Lang|2002}}.</ref>
== Definition ==


For some types of structure, such as [[grup abelian|abelian groups]] and [[ruang vektor|vector spaces]], the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as [[subgrup normal|normal subgroup]] for groups and [[ideal dua sisi|two-sided ideal]] for [[Cincin (matematika)|rings]].
=== First minors ===
If <math>\mathbf{A}</math> is a square matrix, then the ''minor'' of the entry in its <math>i</math>-th row and <math>j</math>-th column is the [[determinan|determinant]] of the [[Matriks (matematika)#Submatriks|submatrix]] formed by deleting the <math>i</math>-th row and the <math>j</math>-th column. This determinant is also called the <math>(i,j)</math> ''minor'', or a ''first minor''.<ref>Burnside, William Snow & Panton, Arthur William (1886) ''[https://books.google.com/books?id=BhgPAAAAIAAJ&pg=PA239 Theory of Equations: with an Introduction to the Theory of Binary Algebraic Form]''.</ref> It is often denoted <math>M_{i,j}</math>. The ''cofactor'' <math>C_{i,j}</math> is obtained by multiplying this minor by <math>(-1)^{i+j}</math>.


Kernels allow defining [[objek hasil bagi|quotient objects]] (also called [[Hasil bagi (aljabar universal)|quotient algebras]] in [[aljabar universal|universal algebra]], and [[kokernel|cokernels]] in [[teori kategori|category theory]]). For many types of algebraic structure, the [[teorema fundamental homomorfisme|fundamental theorem on homomorphisms]] (or [[teorema isomorfisme pertama|first isomorphism theorem]]) states that the [[Citra (fungsi)|image]] of a homomorphism is [[Isomorfisme|isomorphic]] to the quotient by the kernel.


The concept of a kernel has been extended to structures for which the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a [[hubungan kesesuaian|congruence relation]].


This article is a survey of some important types of kernels in algebraic structures.
To illustrate these definitions, consider the following <math>3\times3</math> matrix:<math display="block">\begin{bmatrix}
\,\,\,1 & 4 & 7 \\
\,\,\,3 & 0 & 5 \\
-1 & 9 & \!11 \\
\end{bmatrix}</math>The minor <math>M_{2,3}</math> is obtained by computing the determinant of the matrix whose second row and third column have been removed:<math display="block"> M_{2,3} = \det \begin{bmatrix}
\,\,1 & 4 & \Box\, \\
\,\Box & \Box & \Box\, \\
-1 & 9 & \Box\, \\
\end{bmatrix}= \det \begin{bmatrix}
\,\,\,1 & 4\, \\
-1 & 9\, \\
\end{bmatrix} = 1 \cdot 9 - 4 \cdot (-1) = 13,</math>and the cofactor <math>C_{2,3}</math> is<math display="block">C_{2,3} = (-1)^{2+3}(M_{2,3}) = -13.</math>
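The computation above can be checked mechanically. The following is a minimal sketch (assuming Python with NumPy is available; the helper names are illustrative and not part of the article) that computes a first minor and the corresponding cofactor by deleting one row and one column:
<syntaxhighlight lang="python">
import numpy as np

def first_minor(A, i, j):
    """Determinant of A with (1-based) row i and column j removed."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Cofactor C_{i,j} = (-1)^(i+j) * M_{i,j}."""
    return (-1) ** (i + j) * first_minor(A, i, j)

A = np.array([[1, 4, 7],
              [3, 0, 5],
              [-1, 9, 11]], dtype=float)

print(first_minor(A, 2, 3))  # 13.0, matching M_{2,3} above
print(cofactor(A, 2, 3))     # -13.0, matching C_{2,3}
</syntaxhighlight>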


=== General definition ===
== Linear maps ==
{{Main|Kernel (aljabar linear)}}
Let <math>\mathbf{A}</math> be an <math>m\times n</math> matrix and <math>k</math> an [[bilangan bulat|integer]] with <math>0 < k \leq m</math> and <math>k \leq n</math>. A <math>k \times k</math> ''minor'' of <math>\mathbf{A}</math> is the determinant of a <math>k \times k</math> matrix obtained from ''<math>\mathbf{A}</math>'' by deleting <math>m-k</math> rows and <math>n-k</math> columns. This determinant is also called a ''minor determinant of order <math>k</math> of <math>\mathbf{A}</math>'' or, when <math>m = n</math>, the ''<math>(n-k)</math>-th minor determinant of <math>\mathbf{A}</math>''.<ref group="note">The word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order".</ref> Such a matrix ''<math>\mathbf{A}</math>'' has <math display="inline">{m \choose k} \cdot {n \choose k}</math> minors of size <math>k \times k</math>. The ''minor of order zero'' is often defined to be <math>1</math>. For a square matrix, the ''zeroth minor'' is simply the determinant of the matrix.<ref name="Hohn2">Elementary Matrix Algebra (Third edition), Franz E. Hohn, The Macmillan Company, 1973, {{isbn|978-0-02-355950-1}}</ref><ref name="Encyclopedia of Mathematics3">{{cite book|url=http://www.encyclopediaofmath.org/index.php?title=Minor&oldid=30176|title=Encyclopedia of Mathematics|chapter=Minor}}</ref>
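As an illustrative sketch (assuming NumPy; not part of the source text), all <math>k \times k</math> minors of a matrix can be enumerated by choosing <math>k</math> rows and <math>k</math> columns, and their number agrees with the binomial count given above:
<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations
from math import comb

def all_minors(A, k):
    """Yield ((rows, cols), det) for every k x k minor of A."""
    m, n = A.shape
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            yield (rows, cols), np.linalg.det(A[np.ix_(rows, cols)])

A = np.arange(12, dtype=float).reshape(3, 4)   # a 3 x 4 example matrix
k = 2
minors = list(all_minors(A, k))
print(len(minors), comb(3, k) * comb(4, k))    # 18 18
</syntaxhighlight>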
Let ''V'' and ''W'' be [[ruang vektor|vector spaces]] over a [[Bidang (matematika)|field]] (or, more generally, [[Modul (matematika)|modules]] over a [[Gelanggang (matematika)|ring]]) and let ''T'' be a [[peta linear|linear map]] from ''V'' to ''W''. If '''0'''<sub>''W''</sub> is the [[vektor nol|zero vector]] of ''W'', then the kernel of ''T'' is the [[preimage]] of the [[Ruang nol|zero subspace]] {'''0'''<sub>''W''</sub>}; that is, the [[himpunan bagian|subset]] of ''V'' consisting of all those elements of ''V'' that are mapped by ''T'' to the element '''0'''<sub>''W''</sub>. The kernel is usually denoted {{math|ker '' T ''}}, or some variation thereof:


: <math> \operatorname{ker} T = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}_{W}\}\text{.} </math>


Since a linear map preserves zero vectors, the zero vector '''0'''<sub>''V''</sub> of ''V'' must belong to the kernel. The transformation ''T'' is injective if and only if its kernel is reduced to the zero subspace.


The kernel ker ''T'' is always a [[subruang linier|linear subspace]] of ''V''. Thus, it makes sense to speak of the [[Ruang hasil bagi (aljabar linear)|quotient space]] ''V''/(ker ''T''). The first isomorphism theorem for vector spaces states that this quotient space is [[Isomorfisme alami|naturally isomorphic]] to the [[Citra (fungsi)|image]] of ''T'' (which is a subspace of ''W''). As a consequence, the [[Dimensi (aljabar linear)|dimension]] of ''V'' equals the dimension of the kernel plus the dimension of the image.
Let <math>1 \le i_1 < i_2 < \cdots < i_k \le m</math> and <math>1 \le j_1 < j_2 < \cdots < j_k \le n</math> be [[barisan|sequences]] of indices,<ref group="note">In natural order, the usual assumption when talking about minors unless otherwise stated.</ref> and call them <math>I </math> and <math>J</math>, respectively. There are several notations for the minor<math display="block">\det \left( (\mathbf A_{i_p, j_q})_{p,q = 1, \ldots, k} \right)</math>corresponding to this choice of indices. Depending on the source, it may be written <math display="inline">\det_{I,J} \mathbf A</math>, <math display="inline">[\mathbf A]_{I,J}</math>, <math display="inline">M_{I,J}</math>, <math display="inline">M_{i_1, i_2, \ldots, i_k, j_1, j_2, \ldots, j_k}</math>, or <math>M_{(i),(j)}</math> (where <math>(i)</math> denotes the sequence of indices <math>I </math>, and so on). Moreover, two conventions appear in the literature: some authors<ref>Linear Algebra and Geometry, Igor R. Shafarevich, Alexey O. Remizov, Springer-Verlag Berlin Heidelberg, 2013, {{isbn|978-3-642-30993-9}}</ref> take the minor with indices <math>I </math> and <math>J</math> to mean the determinant of the submatrix, as defined above, whose entries come from the original matrix with row indices in <math>I </math> and column indices in <math>J</math>, whereas other authors mean the submatrix obtained by ''deleting'' the rows in <math>I </math> and the columns in <math>J</math>.<ref name="Hohn2" /> Which convention is in use should always be checked against the source. This article uses the standard definition<math display="block">M_{i,j} = \det \left( \left( \mathbf A_{p,q} \right)_{p \neq i, q \neq j} \right).</math>
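The two conventions can be made concrete in a short sketch (assuming NumPy; the function names are illustrative, chosen only to contrast the "keep" and "delete" readings of the indices):
<syntaxhighlight lang="python">
import numpy as np

def minor_keep(A, I, J):
    """Convention 1: determinant of the submatrix whose rows I and columns J are kept (0-based)."""
    return np.linalg.det(A[np.ix_(I, J)])

def minor_delete(A, I, J):
    """Convention 2: determinant of the submatrix whose rows I and columns J are deleted."""
    rows = [r for r in range(A.shape[0]) if r not in I]
    cols = [c for c in range(A.shape[1]) if c not in J]
    return np.linalg.det(A[np.ix_(rows, cols)])

A = np.array([[1, 4, 7], [3, 0, 5], [-1, 9, 11]], dtype=float)
# The article's M_{i,j} follows the "delete one row and one column" convention:
print(minor_delete(A, [1], [2]))       # 13.0  (= M_{2,3} in 1-based notation)
print(minor_keep(A, [0, 2], [0, 1]))   # 13.0  (same submatrix, selected by keeping rows/columns)
</syntaxhighlight>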
== Applications of minors and cofactors ==


If ''V'' and ''W'' are [[Ruang vektor berdimensi-hingga|finite-dimensional]] and [[Basis (aljabar linear)|bases]] have been chosen, then ''T'' can be described by a [[Matriks (matematika)|matrix]] ''M'', and the kernel can be computed by solving the homogeneous [[sistem persamaan linear|system of linear equations]] {{nowrap|1=''M'''''v''' = '''0'''}}. In this case, the kernel of ''T'' may be identified with the [[Kernel (matriks)|kernel of the matrix]] ''M'', also called the "null space" of ''M''. The dimension of the null space, called the nullity of ''M'', is given by the number of columns of ''M'' minus the [[Rank (teori matriks)|rank]] of ''M'', as a consequence of the [[teori peringkat-nullity|rank–nullity theorem]].
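A small numerical sketch (assuming NumPy; the `null_space` helper is illustrative, built on the singular value decomposition) of computing the kernel of a matrix and checking the rank–nullity relation:
<syntaxhighlight lang="python">
import numpy as np

def null_space(M, tol=1e-12):
    """Orthonormal basis of the kernel (null space) of M, via the SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vh[rank:].T          # columns span {v : M v = 0}

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so nullity = 3 - 1 = 2
N = null_space(M)
print(N.shape[1], M.shape[1] - np.linalg.matrix_rank(M))  # 2 2
print(np.allclose(M @ N, 0))                               # True
</syntaxhighlight>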
=== Cofactor expansion of the determinant ===
{{main|Ekspansi Laplace}}


Solving [[persamaan diferensial homogen|homogeneous differential equations]] often amounts to computing the kernel of certain [[operator diferensial|differential operators]]. For instance, in order to find all twice-[[fungsi terdiferensiasi|differentiable functions]] ''f'' from the [[garis nyata|real line]] to itself such that
The concept of a cofactor is central to the formula of the [[ekspansi Laplace|Laplace expansion]], a method for computing the determinant of a large matrix in terms of determinants of smaller matrices. For any <math>n \times n</math> matrix <math>\mathbf A = (a_{ij})</math>, the determinant of <math>\mathbf A</math>, denoted <math>\det{(\mathbf A)}</math>, can be written as the sum over any single row (or any single column) of the entries of the matrix, each multiplied by the cofactor it generates. In symbols, writing <math>C_{ij} = (-1)^{i+j}M_{ij}</math>, the cofactor expansion along the <math>i</math>-th row is<math display="block">\ \det(\mathbf A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij} C_{ij} =\sum_{j=1}^{n} a_{ij} (-1)^{i+j} M_{ij}, </math>and the cofactor expansion along the <math>j</math>-th column is<math display="block">\det(\mathbf A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij} C_{ij} =\sum_{i=1}^{n} a_{ij} (-1)^{i+j} M_{ij}. </math>
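The row expansion can be illustrated with a short recursive sketch (assuming NumPy; a naive implementation shown only for exposition, as its cost grows factorially and a library determinant should be used in practice):
<syntaxhighlight lang="python">
import numpy as np

def det_laplace(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)   # (-1)^j equals (-1)^(1+(j+1)) in 1-based indices
    return total

A = np.array([[1, 4, 7], [3, 0, 5], [-1, 9, 11]], dtype=float)
print(det_laplace(A), np.linalg.det(A))   # both give the same value
</syntaxhighlight>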


: <math>xf''(x) + 3f'(x) = f(x),</math>
=== Inverse of a matrix ===
{{main|Matriks terbalikkan}}


let ''V'' be the space of all twice differentiable functions, let ''W'' be the space of all functions, and define a linear operator ''T'' from ''V'' to ''W'' by
One can write down the inverse of an [[Matriks yang dapat dibalik|invertible matrix]] by computing its cofactors and using [[aturan Cramer|Cramer's rule]], as follows. The matrix formed by all of the cofactors of a square matrix <math>\mathbf{A} </math> is called the ''cofactor matrix'' (also called the ''matrix of cofactors'' or the ''comatrix''):


: <math>(Tf)(x) = xf''(x) + 3f'(x) - f(x)</math>
<math display="block">\mathbf C=\begin{bmatrix}
C_{11} & C_{12} & \cdots & C_{1n} \\
C_{21} & C_{22} & \cdots & C_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
C_{n1} & C_{n2} & \cdots & C_{nn}
\end{bmatrix} </math>


for ''f'' in ''V'' and ''x'' an arbitrary [[bilangan real|real number]]. Then all solutions to the differential equation are in ker ''T''.
Then the inverse of <math>\mathbf{A} </math> is the transpose of the cofactor matrix times the reciprocal of the determinant of <math>\mathbf{A}</math>:


One can define kernels for homomorphisms between modules over a [[Gelanggang (matematika)|ring]] in an analogous manner. This includes kernels for homomorphisms between [[grup abelian|abelian groups]] as a special case. This example captures the essence of kernels in general [[kategori abelian|abelian categories]]; see [[Kernel (teori kategori)]].
: <math>\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} C^\mathsf{T}.</math>


== Algebras with nonalgebraic structure ==
The transpose of the cofactor matrix is called the [[adjugat|adjugate]] matrix (also called the ''classical adjoint'') of <math>\mathbf{A} </math>.
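A compact sketch of this construction (assuming NumPy; illustrative only, and numerically inferior to a library inverse) that builds the cofactor matrix, transposes it to the adjugate, and divides by the determinant:
<syntaxhighlight lang="python">
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C_{ij} = (-1)^(i+j) * M_{ij}."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

def inverse_via_adjugate(A):
    C = cofactor_matrix(A)
    return C.T / np.linalg.det(A)       # adj(A) / det(A)

A = np.array([[1, 4, 7], [3, 0, 5], [-1, 9, 11]], dtype=float)
print(np.allclose(inverse_via_adjugate(A), np.linalg.inv(A)))   # True
</syntaxhighlight>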
Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations. For example, one may consider [[grup topologi|topological groups]] or [[ruang vektor topologis|topological vector spaces]], which are equipped with a [[Topologi (struktur)|topology]]. In this case, we would expect the homomorphism ''f'' to preserve this additional structure; in the topological examples, we would want ''f'' to be a [[peta kontinu|continuous map]]. The process may run into a snag with the quotient algebras, which may not be well-behaved. In the topological examples, we can avoid problems by requiring that topological algebraic structures be [[Ruang Hausdorff|Hausdorff]] (as is usually done); then the kernel (however it is constructed) will be a [[set tertutup|closed set]] and the [[Ruang hasil bagi (topologi)|quotient space]] will work fine (and also be Hausdorff).


== Kernels in category theory ==
The formula above can be generalized as follows. Let <math>1 \le i_1 < i_2 < \ldots < i_k \le n</math> and <math>1 \le j_1 < j_2 < \ldots < j_k \le n</math> be ordered sequences (in natural order) of indices (here <math>\mathbf{A} </math> is an <math>n \times n </math> matrix). Then<ref name="Prasolov1994">{{cite book|author=Viktor Vasil'evich Prasolov|date=13 June 1994|url=https://books.google.com/books?id=b4yKAwAAQBAJ&pg=PR15|title=Problems and Theorems in Linear Algebra|publisher=American Mathematical Soc.|isbn=978-0-8218-0236-6|pages=15–}}</ref>
The notion of ''kernel'' in [[teori kategori|category theory]] is a generalisation of the kernels of abelian algebras; see [[Kernel (teori kategori)]]. The categorical generalisation of the kernel as a congruence relation is the ''[[pasangan kernel|kernel pair]]''. (There is also the notion of a [[kernel perbedaan|difference kernel]], or binary [[Equalizer (matematika)|equaliser]].)


== See also ==
: <math>[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},</math>


* [[Kernel (aljabar linear)]]
where <math>I'</math> and <math>J'</math> denote the ordered sequences of indices (in natural order, as above) complementary to <math>I</math> and <math>J </math>, so that every index <math>1,\dots,n </math> appears exactly once in either <math>I</math> or <math>I'</math>, but not in both (and similarly for <math>J </math> and <math>J'</math>), and <math>[\mathbf A]_{I,J}</math> denotes the determinant of the submatrix of <math>\mathbf{A} </math> formed by choosing the rows with index in <math>I</math> and the columns with index in <math>J </math>. Also, <math>[\mathbf A]_{I,J} = \det \left( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \right)</math>. A simple proof can be given using the wedge product. Indeed,
* [[Himpunan nol]]


== Notes ==
: <math>[\mathbf A^{-1}]_{I,J}(e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}}, </math>
<references responsive="1"></references>


== References ==
where <math>e_1,\dots,e_n</math> are the basis vectors. Acting by <math>\mathbf{A} </math> on both sides, one gets


* {{Cite book|last1=Dummit|first1=David S.|last2=Foote|first2=Richard M.|year=2004|title=Abstract Algebra|publisher=[[John Wiley & Sons|Wiley]]|isbn=0-471-43334-9|edition=3rd|ref=harv}}
: <math>[\mathbf A^{-1}]_{I,J}\det \mathbf A (e_1\wedge\ldots \wedge e_n) = \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}})=\pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). </math>


* {{Cite book|last=Lang|first=Serge|year=2002|title=Algebra|publisher=[[Springer Science+Business Media|Springer]]|isbn=0-387-95385-X|series=[[Graduate Texts in Mathematics]]|ref=harv|authorlink=Serge Lang}}
The sign can be worked out to be <math>(-1)^{ \sum_{s=1}^{k} i_s - \sum_{s=1}^{k} j_s}</math>, so the sign is determined by the sums of the elements of <math>I </math> and <math>J </math>.
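The identity and its sign can be spot-checked numerically. The following sketch (assuming NumPy; the index sums are taken 1-based to match the formula, and the particular matrix and index sets are arbitrary choices) compares both sides for one choice of <math>I</math> and <math>J</math>:
<syntaxhighlight lang="python">
import numpy as np

def minor(M, rows, cols):
    """Determinant of the submatrix of M with the given (0-based) rows and columns."""
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
I, J = [0, 2], [1, 4]                        # I = {1,3}, J = {2,5} in 1-based terms
Ic = [r for r in range(n) if r not in I]     # complementary index sets I', J'
Jc = [c for c in range(n) if c not in J]

lhs = minor(np.linalg.inv(A), I, J)
sign = (-1) ** (sum(i + 1 for i in I) + sum(j + 1 for j in J))
rhs = sign * minor(A, Jc, Ic) / np.linalg.det(A)
print(np.isclose(lhs, rhs))                  # True
</syntaxhighlight>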


----
=== Inverse of a matrix ===
{{short description|Inverse image of zero under a homomorphism}}
One can write down the inverse of an [[invertible matrix]] by computing its cofactors by using [[Cramer's rule]], as follows. The matrix formed by all of the cofactors of a square matrix '''A''' is called the '''cofactor matrix''' (also called the '''matrix of cofactors''' or, sometimes, ''comatrix''):
In [[algebra]], the '''kernel''' of a [[homomorphism]] (function that preserves the [[Algebraic structure|structure]]) is generally the [[inverse image]] of 0 (except for [[Group (mathematics)|groups]] whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the [[Kernel (linear algebra)|kernel of a linear map]]. The [[Kernel (matrix)|kernel of a matrix]], also called the ''null space'', is the kernel of the linear map defined by the matrix.


The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is [[Injective function|injective]], that is if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.<ref>See {{harvp|Dummit|Foote|2004}} and {{harvp|Lang|2002}}.</ref>
: ...


For some types of structure, such as [[Abelian group|abelian groups]] and [[Vector space|vector spaces]], the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as [[normal subgroup]] for groups and [[Two-sided ideal|two-sided ideals]] for [[Ring (mathematics)|rings]].
Then the inverse of '''A''' is the transpose of the cofactor matrix times the reciprocal of the determinant of ''A'':


Kernels allow defining [[Quotient object|quotient objects]] (also called [[Quotient (universal algebra)|quotient algebras]] in [[universal algebra]], and [[Cokernel|cokernels]] in [[category theory]]). For many types of algebraic structure, the [[fundamental theorem on homomorphisms]] (or [[first isomorphism theorem]]) states that the [[Image (mathematics)|image]] of a homomorphism is [[Isomorphism|isomorphic]] to the quotient by the kernel.
: <math>\mathbf A^{-1} = \frac{1}{\operatorname{det}(\mathbf A)} C^\mathsf{T}.</math>


The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a [[congruence relation]].
The transpose of the cofactor matrix is called the [[adjugate]] matrix (also called the ''classical adjoint'') of '''A'''.


This article is a survey for some important types of kernels in algebraic structures.
The above formula can be generalized as follows: Let <math>1 \le i_1 < i_2 < \ldots < i_k \le n</math> and <math>1 \le j_1 < j_2 < \ldots < j_k \le n</math> be ordered sequences (in natural order) of indexes (here '''A''' is an ''n''&#x2009;×&#x2009;''n'' matrix). Then<ref name="Prasolov19942">{{cite book|author=Viktor Vasil'evich Prasolov|date=13 June 1994|url=https://books.google.com/books?id=b4yKAwAAQBAJ&pg=PR15|title=Problems and Theorems in Linear Algebra|publisher=American Mathematical Soc.|isbn=978-0-8218-0236-6|pages=15–}}</ref>


== Survey of examples ==
: <math>[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},</math>


=== Linear maps ===
where ''I′'', ''J′'' denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to ''I'', ''J'', so that every index 1, ..., ''n'' appears exactly once in either ''I'' or ''I′'', but not in both (similarly for the ''J'' and ''J′'') and <math>[\mathbf A]_{I,J}</math> denotes the determinant of the submatrix of '''A''' formed by choosing the rows of the index set ''I'' and columns of index set ''J''. Also, <math>[\mathbf A]_{I,J} = \det \left( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \right)</math>. A simple proof can be given using wedge product. Indeed,
{{Main|Kernel (linear algebra)}}
Let ''V'' and ''W'' be [[Vector space|vector spaces]] over a [[Field (mathematics)|field]] (or more generally, [[Module (mathematics)|modules]] over a [[Ring (mathematics)|ring]]) and let ''T'' be a [[linear map]] from ''V'' to ''W''. If '''0'''<sub>''W''</sub> is the [[zero vector]] of ''W'', then the kernel of ''T'' is the [[preimage]] of the [[Zero space|zero subspace]] {'''0'''<sub>''W''</sub>}; that is, the [[subset]] of ''V'' consisting of all those elements of ''V'' that are mapped by ''T'' to the element '''0'''<sub>''W''</sub>. The kernel is usually denoted as {{nowrap|ker ''T''}}, or some variation thereof:


: <math>[\mathbf A^{-1}]_{I,J}(e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}}, </math>
: <math> \ker T = \{\mathbf{v} \in V : T(\mathbf{v}) = \mathbf{0}_{W}\} . </math>


Since a linear map preserves zero vectors, the zero vector '''0'''<sub>''V''</sub> of ''V'' must belong to the kernel. The transformation ''T'' is injective if and only if its kernel is reduced to the zero subspace.
where <math>e_1, \ldots, e_n</math> are the basis vectors. Acting by '''A''' on both sides, one gets


The kernel ker ''T'' is always a [[linear subspace]] of ''V''. Thus, it makes sense to speak of the [[Quotient space (linear algebra)|quotient space]] {{nowrap|''V'' / (ker ''T'')}}. The first isomorphism theorem for vector spaces states that this quotient space is [[Natural isomorphism|naturally isomorphic]] to the [[Image (function)|image]] of ''T'' (which is a subspace of ''W''). As a consequence, the [[Dimension (linear algebra)|dimension]] of ''V'' equals the dimension of the kernel plus the dimension of the image.
: <math>[\mathbf A^{-1}]_{I,J}\det \mathbf A (e_1\wedge\ldots \wedge e_n) = \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}})=\pm [\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n). </math>


If ''V'' and ''W'' are [[Finite-dimensional vector space|finite-dimensional]] and [[Basis (linear algebra)|bases]] have been chosen, then ''T'' can be described by a [[Matrix (mathematics)|matrix]] ''M'', and the kernel can be computed by solving the homogeneous [[system of linear equations]] {{nowrap|1=''M'''''v''' = '''0'''}}. In this case, the kernel of ''T'' may be identified to the [[Kernel (matrix)|kernel of the matrix]] ''M'', also called "null space" of ''M''. The dimension of the null space, called the nullity of ''M'', is given by the number of columns of ''M'' minus the [[Rank (matrix theory)|rank]] of ''M'', as a consequence of the [[rank–nullity theorem]].
The sign can be worked out to be <math>(-1)^{ \sum_{s=1}^{k} i_s - \sum_{s=1}^{k} j_s}</math>, so the sign is determined by the sums of elements in ''I'' and ''J''.


Solving [[Homogeneous differential equation|homogeneous differential equations]] often amounts to computing the kernel of certain [[Differential operator|differential operators]]. For instance, in order to find all twice-[[Differentiable function|differentiable functions]] ''f'' from the [[real line]] to itself such that
=== Other applications ===
Given an <math>m \times n</math> matrix with [[Bilangan riil|real]] entries (or entries from any other [[Medan (matematika)|field]]) and [[Rank (teori matriks)|rank]] <math>r</math>, there exists at least one nonzero <math>r \times r</math> minor, while all larger minors are zero.


: <math>x f''(x) + 3 f'(x) = f(x),</math>
We will use the following notation for minors. If <math>\mathbf{A} </math> is an <math>m \times n</math> matrix, <math>I </math> is a [[himpunan bagian|subset]] of <math>\{1,\dots,m\}</math> with <math>k </math> elements, and <math>J </math> is a subset of <math>\{1,\dots,n\}</math> with <math>k </math> elements, then we write <math>\left[A\right]_{I,J}</math> for the <math>k \times k </math> minor of <math>\mathbf{A} </math> corresponding to the rows with index in <math>I </math> and the columns with index in <math>J </math>.


let ''V'' be the space of all twice differentiable functions, let ''W'' be the space of all functions, and define a linear operator ''T'' from ''V'' to ''W'' by
* If <math>I = J</math>, then <math>\left[A\right]_{I,J}</math> is called a ''principal minor''.
* If the matrix that corresponds to a principal minor is an upper-left square submatrix of the larger matrix (that is, it consists of the entries in rows and columns from <math>1 </math> to <math>k </math>), then the principal minor is called a ''leading principal minor (of order <math>k </math>)'' or ''corner (principal) minor (of order <math>k </math>)''.<ref name="Encyclopedia of Mathematics3" /> For an <math>n \times n </math> square matrix, there are <math>n</math> leading principal minors.
* A ''basic minor'' of a matrix is the determinant of a square submatrix of maximal size with nonzero determinant.<ref name="Encyclopedia of Mathematics3" />
* For [[matriks Hermite|Hermitian matrices]], the leading principal minors can be used to test for [[Matriks positif-tentu|positive definiteness]] and the principal minors can be used to test for [[Matriks positif-kesemitentuan|positive semidefiniteness]]. See [[kriteria Sylvester|Sylvester's criterion]] for more details, and the sketch below for a numerical illustration.
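As a numerical illustration of the last point (a sketch assuming NumPy; the helper is illustrative and restricted to real symmetric matrices), Sylvester's criterion checks that every leading principal minor is positive:
<syntaxhighlight lang="python">
import numpy as np

def is_positive_definite_sylvester(A, tol=1e-12):
    """Sylvester's criterion: all leading principal minors of a Hermitian matrix are > 0."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])        # a classic positive-definite example
print(is_positive_definite_sylvester(A))     # True
print(np.all(np.linalg.eigvalsh(A) > 0))     # True, agreeing with the eigenvalue test
</syntaxhighlight>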


: <math>(Tf)(x) = x f''(x) + 3 f'(x) - f(x)</math>
Both the formula for ordinary [[perkalian matriks|matrix multiplication]] and the [[rumus Cauchy–Binet|Cauchy–Binet formula]] for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that <math>\mathbf{A} </math> is an <math>m \times n</math> matrix, <math>\mathbf{B} </math> is an <math>n \times p</math> matrix, <math>I </math> is a [[himpunan bagian|subset]] of <math>\{1,\dots,m\}</math> with <math>k </math> elements, and <math>J </math> is a subset of <math>\{1,\dots,p\}</math> with <math>k </math> elements. Then


for ''f'' in ''V'' and ''x'' an arbitrary [[real number]]. Then all solutions to the differential equation are in {{nowrap|ker ''T''}}.
: <math>[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,</math>


One can define kernels for homomorphisms between modules over a [[Ring (mathematics)|ring]] in an analogous manner. This includes kernels for homomorphisms between [[Abelian group|abelian groups]] as a special case. This example captures the essence of kernels in general [[abelian categories]]; see [[Kernel (category theory)]].
where the sum extends over all subsets <math>K</math> of <math>\{1,\dots,n\}</math> with <math>k </math> elements. This formula is a straightforward extension of the Cauchy–Binet formula.
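This statement can be verified numerically for small sizes. A sketch (assuming NumPy; the matrices, sizes, and helper names are arbitrary illustrative choices) that sums over all <math>k</math>-element subsets <math>K</math>:
<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

def minor(M, rows, cols):
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(2)
m, n, p, k = 3, 4, 3, 2
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, p))
I, J = (0, 2), (1, 2)

lhs = minor(A @ B, I, J)
rhs = sum(minor(A, I, K) * minor(B, K, J) for K in combinations(range(n), k))
print(np.isclose(lhs, rhs))   # True: [AB]_{I,J} = sum_K [A]_{I,K} [B]_{K,J}
</syntaxhighlight>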


=== Other applications ===
=== Group homomorphisms ===
Let ''G'' and ''H'' be [[Group (mathematics)|groups]] and let ''f'' be a [[group homomorphism]] from ''G'' to ''H''. If ''e<sub>H</sub>'' is the [[identity element]] of ''H'', then the ''kernel'' of ''f'' is the preimage of the singleton set {''e<sub>H</sub>''}; that is, the subset of ''G'' consisting of all those elements of ''G'' that are mapped by ''f'' to the element ''e<sub>H</sub>''.
Given an ''m''&#x2009;×&#x2009;''n'' matrix with [[Real number|real]] entries (or entries from any other [[Field (mathematics)|field]]) and [[Rank (matrix theory)|rank]] ''r'', then there exists at least one non-zero ''r''&#x2009;×&#x2009;''r'' minor, while all larger minors are zero.


The kernel is usually denoted {{nowrap|ker ''f''}} (or a variation). In symbols:
We will use the following notation for minors: if '''A''' is an ''m''&#x2009;×&#x2009;''n'' matrix, ''I'' is a [[subset]] of {1,...,''m''} with ''k'' elements, and ''J'' is a subset of {1,...,''n''} with ''k'' elements, then we write ['''A''']<sub>''I'',''J''</sub> for the {{nowrap|''k''&thinsp;×&thinsp;''k''}} minor of '''A''' that corresponds to the rows with index in ''I'' and the columns with index in ''J''.


: <math> \ker f = \{g \in G : f(g) = e_{H}\} .</math>
* If ''I'' = ''J'', then ['''A''']<sub>''I'',''J''</sub> is called a ''principal minor''.
* If the matrix that corresponds to a principal minor is a square upper-left [[Matrix (mathematics)#Submatrix|submatrix]] of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to ''k'', also known as a leading principal submatrix), then the principal minor is called a ''leading principal minor (of order k)'' or ''corner (principal) minor (of order k)''.<ref name="Encyclopedia of Mathematics">{{cite book|url=http://www.encyclopediaofmath.org/index.php?title=Minor&oldid=30176|title=Encyclopedia of Mathematics|chapter=Minor}}</ref> For an ''n''&#x2009;×&#x2009;''n'' square matrix, there are ''n'' leading principal minors.
* A ''basic minor'' of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant.<ref name="Encyclopedia of Mathematics" />
* For [[Hermitian matrix|Hermitian matrices]], the leading principal minors can be used to test for [[Positive-definite matrix|positive definiteness]] and the principal minors can be used to test for [[Positive-semidefinite matrix|positive semidefiniteness]]. See [[Sylvester's criterion]] for more details.


Since a group homomorphism preserves identity elements, the identity element ''e<sub>G</sub>'' of ''G'' must belong to the kernel.
Both the formula for ordinary [[matrix multiplication]] and the [[Cauchy–Binet formula]] for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that '''A''' is an ''m''&#x2009;×&#x2009;''n'' matrix, '''B''' is an ''n''&#x2009;×&#x2009;''p'' matrix, ''I'' is a [[subset]] of {1,...,''m''} with ''k'' elements and ''J'' is a subset of {1,...,''p''} with ''k'' elements. Then


The homomorphism ''f'' is injective if and only if its kernel is only the singleton set {''e<sub>G</sub>''}. If ''f'' were not injective, then the non-injective elements can form a distinct element of its kernel: there would exist {{nowrap|''a'', ''b'' &isin; ''G''}} such that {{nowrap|''a'' ≠ ''b''}} and {{nowrap|1=''f''(''a'') = ''f''(''b'')}}. Thus {{nowrap|1=''f''(''a'')''f''(''b'')<sup>−1</sup> = ''e''<sub>''H''</sub>}}. ''f'' is a group homomorphism, so inverses and group operations are preserved, giving {{nowrap|1=''f''(''ab''<sup>−1</sup>) = ''e''<sub>''H''</sub>}}; in other words, {{nowrap|''ab''<sup>−1</sup> &isin; ker ''f''}}, and ker ''f'' would not be the singleton. Conversely, distinct elements of the kernel violate injectivity directly: if there would exist an element {{nowrap|''g'' ≠ ''e''<sub>''G''</sub> &isin; ker ''f''}}, then {{nowrap|1=''f''(''g'') = ''f''(''e''<sub>''G''</sub>) = ''e''<sub>''H''</sub>}}, thus ''f'' would not be injective.
: <math>[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,</math>


{{nowrap|ker ''f''}} is a [[subgroup]] of ''G'' and further it is a [[normal subgroup]]. Thus, there is a corresponding [[quotient group]] {{nowrap|''G'' / (ker ''f'')}}. This is isomorphic to ''f''(''G''), the image of ''G'' under ''f'' (which is a subgroup of ''H'' also), by the [[Isomorphism theorems|first isomorphism theorem]] for groups.
where the sum extends over all subsets ''K'' of {1,...,''n''} with ''k'' elements. This formula is a straightforward extension of the Cauchy–Binet formula.


In the special case of [[Abelian group|abelian groups]], there is no deviation from the previous section.
== See also ==


==== Example ====
* [[Matriks (matematika)#Submatriks|Submatriks]]
Let ''G'' be the [[cyclic group]] on 6 elements {{nowrap|{{mset|0, 1, 2, 3, 4, 5}}}} with [[Modular arithmetic|modular addition]], ''H'' be the cyclic group on 2 elements {{nowrap|{{mset|0, 1}}}} with modular addition, and ''f'' the homomorphism that maps each element ''g'' in ''G'' to the element ''g'' modulo 2 in ''H''. Then {{nowrap|ker ''f'' {{=}} {0, 2, 4}}}, since all these elements are mapped to 0<sub>''H''</sub>. The quotient group {{nowrap|''G'' / (ker ''f'')}} has two elements: {{nowrap|{{mset|0, 2, 4}}}} and {{nowrap|{{mset|1, 3, 5}}}}. It is indeed isomorphic to ''H''.
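A minimal sketch in Python (illustrative only; plain integers stand in for the group elements) that recovers this kernel and the two cosets:
<syntaxhighlight lang="python">
G = range(6)                      # Z_6 under addition mod 6
f = lambda g: g % 2               # homomorphism onto Z_2

kernel = {g for g in G if f(g) == 0}
cosets = {frozenset((g + h) % 6 for h in kernel) for g in G}
print(sorted(kernel))               # [0, 2, 4]
print([sorted(c) for c in cosets])  # the two cosets {0, 2, 4} and {1, 3, 5}
</syntaxhighlight>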


== Notes ==
=== Ring homomorphisms ===
{{Ring theory sidebar}}
<references group="note" />


Let ''R'' and ''S'' be [[Ring (mathematics)|rings]] (assumed [[Unital algebra|unital]]) and let ''f'' be a [[ring homomorphism]] from ''R'' to ''S''. If 0<sub>''S''</sub> is the [[zero element]] of ''S'', then the ''kernel'' of ''f'' is its kernel as a linear map over the integers, or, equivalently, as additive groups. It is the preimage of the [[zero ideal]] {{mset|0<sub>''S''</sub>}}; that is, the subset of ''R'' consisting of all those elements of ''R'' that are mapped by ''f'' to the element 0<sub>''S''</sub>. The kernel is usually denoted {{nowrap|ker ''f''}} (or a variation). In symbols:
== References ==

<references responsive="1"></references>
: <math> \operatorname{ker} f = \{r \in R : f(r) = 0_{S}\} .</math>

Since a ring homomorphism preserves zero elements, the zero element 0<sub>''R''</sub> of ''R'' must belong to the kernel. The homomorphism ''f'' is injective if and only if its kernel is only the singleton set {{mset|0<sub>''R''</sub>}}. This is always the case if ''R'' is a [[Field (mathematics)|field]], and ''S'' is not the [[zero ring]].

Since ker ''f'' contains the multiplicative identity only when ''S'' is the zero ring, it turns out that the kernel is generally not a [[subring]] of ''R.'' The kernel is a sub[[Rng (algebra)|rng]], and, more precisely, a two-sided [[Ideal (ring theory)|ideal]] of ''R''. Thus, it makes sense to speak of the [[quotient ring]] {{nowrap|''R'' / (ker ''f'')}}. The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of ''f'' (which is a subring of ''S''). (Note that rings need not be unital for the kernel definition).

To some extent, this can be thought of as a special case of the situation for modules, since these are all [[Bimodule|bimodules]] over a ring ''R'':

* ''R'' itself;
* any two-sided ideal of ''R'' (such as ker ''f'');
* any quotient ring of ''R'' (such as {{nowrap|''R'' / (ker ''f'')}}); and
* the [[codomain]] of any ring homomorphism whose domain is ''R'' (such as ''S'', the codomain of ''f'').

However, the isomorphism theorem gives a stronger result, because ring isomorphisms preserve multiplication while module isomorphisms (even between rings) in general do not.

This example captures the essence of kernels in general [[Mal'cev algebra|Mal'cev algebras]].

=== Monoid homomorphisms ===
Let ''M'' and ''N'' be [[Monoid (algebra)|monoids]] and let ''f'' be a [[monoid homomorphism]] from ''M'' to ''N''. Then the ''kernel'' of ''f'' is the subset of the [[direct product]] {{nowrap|''M'' × ''M''}} consisting of all those [[Ordered pair|ordered pairs]] of elements of ''M'' whose components are both mapped by ''f'' to the same element in ''N''. The kernel is usually denoted {{nowrap|ker ''f''}} (or a variation thereof). In symbols:

: <math>\operatorname{ker} f = \left\{\left(m, m'\right) \in M \times M : f(m) = f\left(m'\right)\right\}.</math>

Since ''f'' is a [[Function (mathematics)|function]], the elements of the form {{nowrap|(''m'', ''m'')}} must belong to the kernel. The homomorphism ''f'' is injective if and only if its kernel is only the [[Equality (mathematics)|diagonal set]] {{nowrap|{{mset|(''m'', ''m'') : ''m'' in ''M''}}}}.

It turns out that {{nowrap|ker ''f''}} is an [[equivalence relation]] on ''M'', and in fact a [[congruence relation]]. Thus, it makes sense to speak of the [[quotient monoid]] {{nowrap|''M'' / (ker ''f'')}}. The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of ''f'' (which is a [[submonoid]] of ''N''; for the congruence relation).

This is very different in flavour from the above examples. In particular, the preimage of the identity element of ''N'' is ''not'' enough to determine the kernel of ''f''.
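To make the difference concrete, here is a small illustrative sketch (plain Python; the choice of monoid, the multiplicative monoid of integers modulo 6, and the map to integers modulo 3 are examples of our own, not from the text) that computes the kernel as a set of pairs:
<syntaxhighlight lang="python">
from itertools import product

M = range(6)                          # multiplicative monoid of integers mod 6
f = lambda x: x % 3                   # a monoid homomorphism into integers mod 3 (valid since 3 divides 6)

kernel = {(a, b) for a, b in product(M, M) if f(a) == f(b)}
diagonal = {(a, a) for a in M}
print(kernel == diagonal)             # False: f is not injective
print({a for a in M if f(a) == 1})    # {1, 4}: the preimage of the identity alone does not determine ker f
</syntaxhighlight>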

== Universal algebra ==
All the above cases may be unified and generalized in [[universal algebra]].

=== General case ===
Let ''A'' and ''B'' be [[Algebraic structure|algebraic structures]] of a given type and let ''f'' be a homomorphism of that type from ''A'' to ''B''. Then the ''kernel'' of ''f'' is the subset of the [[direct product]] {{nowrap|''A'' × ''A''}} consisting of all those [[Ordered pair|ordered pairs]] of elements of ''A'' whose components are both mapped by ''f'' to the same element in ''B''. The kernel is usually denoted {{nowrap|ker ''f''}} (or a variation). In symbols:

: <math>\operatorname{ker} f = \left\{\left(a, a'\right) \in A \times A : f(a) = f\left(a'\right)\right\}\mbox{.}</math>

Since ''f'' is a [[Function (mathematics)|function]], the elements of the form {{nowrap|(''a'', ''a'')}} must belong to the kernel.

The homomorphism ''f'' is injective if and only if its kernel is exactly the diagonal set {{nowrap|{{mset|(''a'', ''a'') : ''a'' &isin; ''A''}}}}.

It is easy to see that ker ''f'' is an [[equivalence relation]] on ''A'', and in fact a [[congruence relation]]. Thus, it makes sense to speak of the [[Quotient (universal algebra)|quotient algebra]] {{nowrap|''A'' / (ker ''f'')}}. The [[Isomorphism theorem#General|first isomorphism theorem]] in general universal algebra states that this quotient algebra is naturally isomorphic to the image of ''f'' (which is a [[subalgebra]] of ''B'').

Note that the definition of kernel here (as in the monoid example) doesn't depend on the algebraic structure; it is a purely [[Set (mathematics)|set]]-theoretic concept. For more on this general concept, outside of abstract algebra, see [[kernel of a function]].

== Algebras with nonalgebraic structure ==
Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations. For example, one may consider [[Topological group|topological groups]] or [[Topological vector space|topological vector spaces]], which are equipped with a [[Topology (structure)|topology]]. In this case, we would expect the homomorphism ''f'' to preserve this additional structure; in the topological examples, we would want ''f'' to be a [[continuous map]]. The process may run into a snag with the quotient algebras, which may not be well-behaved. In the topological examples, we can avoid problems by requiring that topological algebraic structures be [[Hausdorff space|Hausdorff]] (as is usually done); then the kernel (however it is constructed) will be a [[closed set]] and the [[Quotient space (topology)|quotient space]] will work fine (and also be Hausdorff).

== Kernels in category theory ==
The notion of ''kernel'' in [[category theory]] is a generalisation of the kernels of abelian algebras; see [[Kernel (category theory)]]. The categorical generalisation of the kernel as a congruence relation is the ''[[kernel pair]]''. (There is also the notion of [[difference kernel]], or binary [[Equalizer (mathematics)|equaliser]].)

== See also ==

* [[Kernel (linear algebra)]]
* [[Zero set]]


== External links ==
== Notes ==
{{reflist}}


== References ==
* [http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-19-determinant-formulas-and-cofactors/ MIT Linear Algebra Lecture on Cofactors] at Google Video, from MIT OpenCourseWare
{{refbegin}}
* [http://planetmath.org/encyclopedia/Cofactor.html PlanetMath entry of ''Cofactors''] {{Webarchive|url=https://web.archive.org/web/20120408004640/http://planetmath.org/encyclopedia/Cofactor.html|date=2012-04-08}}
* {{cite book|last1=Dummit|first1=David S.|last2=Foote|first2=Richard M.|year=2004|title=Abstract Algebra|publisher=[[John Wiley & Sons|Wiley]]|isbn=0-471-43334-9|edition=3rd}}
* [http://www.encyclopediaofmath.org/index.php/Minor Springer Encyclopedia of Mathematics entry for ''Minor'']
* {{cite book|last=Lang|first=Serge|year=2002|title=Algebra|publisher=[[Springer Science+Business Media|Springer]]|isbn=0-387-95385-X|series=[[Graduate Texts in Mathematics]]|author-link=Serge Lang}}
{{Aljabar linear}}
{{refend}}
[[Kategori:Aljabar linear]]
