Elementary transformations of systems. Elementary matrix transformations. Theorem on elementary transformations of SLAEs (systems of linear algebraic equations)

Elementary transformations include:

1) Adding to both sides of one equation the corresponding sides of another equation, multiplied by the same non-zero number.

2) Rearranging the equations.

3) Removing from the system equations that are identities for all x.

KRONECKER–CAPELLI THEOREM

(system compatibility condition)

(Leopold Kronecker (1823-1891) German mathematician)

Theorem: A system is consistent (has at least one solution) if and only if the rank of the system matrix equals the rank of the extended matrix.

Obviously, system (1) can be written in column form as:

x 1 A 1 + x 2 A 2 + … + x n A n = B,

where A j is the j-th column of the matrix A and B is the column of free terms.

Proof.

1) If a solution exists, then the column of free terms is a linear combination of the columns of matrix A, which means that appending this column to the matrix, i.e. the transition A → A*, does not change the rank.

2) If RgA = RgA*, then the two matrices have the same basis minor. The column of free terms is then a linear combination of the columns of the basis minor, so the representation above is valid.
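As an illustration of the criterion (not part of the original text), here is a minimal Python sketch that computes ranks by row reduction with exact rational arithmetic and applies the Kronecker–Capelli test; the example system is hypothetical:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivot rows found so far
    for col in range(len(m[0]) if m else 0):
        # find a pivot in this column at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical system: x + y = 2, 2x + 2y = 5 (clearly contradictory)
A  = [[1, 1], [2, 2]]          # main matrix
Ab = [[1, 1, 2], [2, 2, 5]]    # extended matrix A*
print(rank(A), rank(Ab))       # 1 2
print(rank(A) == rank(Ab))     # False -> inconsistent by Kronecker-Capelli
```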

Example. Determine the compatibility of a system of linear equations:

~ . RgA = 2.

A* = RgA* = 3.

Since RgA ≠ RgA*, the system is inconsistent.

Example. Determine the compatibility of a system of linear equations.

A = ; Δ = 2 + 12 = 14 ≠ 0; RgA = 2;

A* =

RgA* = 2.

The system is consistent. Solution: x 1 = 1; x 2 = 1/2.

2.6 GAUSS METHOD

(Carl Friedrich Gauss (1777-1855) German mathematician)

Unlike the matrix method and Cramer's method, the Gaussian method can be applied to systems of linear equations with an arbitrary number of equations and unknowns. The essence of the method is the sequential elimination of unknowns.

Consider a system of linear equations:

Divide both sides of the 1st equation by a 11 ≠ 0, then:

1) multiply it by a 21 and subtract the result from the second equation;

2) multiply it by a 31 and subtract the result from the third equation.

where d 1j = a 1j / a 11 , j = 2, 3, …, n+1, and

d ij = a ij – a i1 d 1j , i = 2, 3, …, n; j = 2, 3, …, n+1.
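The formulas above describe one elimination step. A minimal Python sketch of that single step (the 2×2 augmented matrix is an invented example, not from the text):

```python
from fractions import Fraction

def eliminate_first_column(aug):
    """One step of Gaussian elimination on an augmented matrix [a_ij],
    j = 1..n+1: normalize row 1 by a11 (d_1j = a_1j / a_11), then subtract
    a_i1 times the normalized row 1 from every later row
    (d_ij = a_ij - a_i1 * d_1j)."""
    a = [[Fraction(x) for x in row] for row in aug]
    a11 = a[0][0]
    assert a11 != 0, "pivot a11 must be non-zero"
    a[0] = [x / a11 for x in a[0]]             # d_1j = a_1j / a_11
    for i in range(1, len(a)):
        ai1 = a[i][0]
        a[i] = [a[i][j] - ai1 * a[0][j] for j in range(len(a[i]))]
    return a

# Hypothetical system: 2x + 4y = 6, 3x + 5y = 7
step = eliminate_first_column([[2, 4, 6], [3, 5, 7]])
print(step)   # first column below the pivot is now zero
```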

Example. Solve a system of linear equations using the Gauss method.

, from where we get: x 3 = 2; x 2 = 5; x 1 = 1.

Example. Solve the system using the Gaussian method.

Let's create an extended matrix of the system.

Thus, the original system can be represented as:

, from where we get: z = 3; y = 2; x = 1.

The answer obtained coincides with the answer obtained for this system by the Cramer method and the matrix method.

To solve it yourself:

Answer: (1, 2, 3, 4).

TOPIC 3. ELEMENTS OF VECTOR ALGEBRA

BASIC DEFINITIONS

Definition. A vector is a directed segment (an ordered pair of points). The null vector, whose beginning and end coincide, is also counted among the vectors.

Definition. The length (modulus) of a vector is the distance between its beginning and end.

Definition. Vectors are called collinear if they lie on the same line or on parallel lines. The null vector is collinear to any vector.

Definition. Vectors are called coplanar if there exists a plane to which they are all parallel.

Collinear vectors are always coplanar, but not all coplanar vectors are collinear.

Definition. Vectors are called equal if they are collinear, identically directed, and have equal moduli.

All vectors can be brought to a common origin, i.e., one can construct vectors respectively equal to the given ones and having a common origin. From the definition of equality of vectors it follows that any vector has infinitely many vectors equal to it.

Definition. The linear operations on vectors are addition and multiplication by a number.

The sum of the vectors a and b is the vector a + b.

The product αa is a vector with |αa| = |α|·|a|, collinear to a.

The vector αa is codirectional with a (αa ↑↑ a) if α > 0.

The vector αa is oppositely directed to a (αa ↑↓ a) if α < 0.

PROPERTIES OF VECTORS

1) a + b = b + a (commutativity);

2) a + (b + c) = (a + b) + c;

5) (αβ)a = α(βa) (associativity);

6) (α + β)a = αa + βa (distributivity);

7) α(a + b) = αa + αb.

Definition.

1) A basis in space is any 3 non-coplanar vectors taken in a certain order.

2) A basis in the plane is any 2 non-collinear vectors taken in a certain order.

3) A basis on a line is any non-zero vector.

§7. Systems of linear equations

Equivalent systems. Elementary transformations of a system of linear equations.

Let C be the field of complex numbers. An equation of the form

a 1 x 1 + a 2 x 2 + … + a n x n = b, (1)

where a 1 , …, a n , b ∈ C, is called a linear equation in n unknowns x 1 , …, x n . An ordered set (c 1 , …, c n ) ∈ C n is called a solution of equation (1) if a 1 c 1 + … + a n c n = b.

A system of m linear equations in n unknowns is a system of equations of the form:

here the a ij are the coefficients of the system of linear equations, and the b i are the free terms.

The rectangular table

A = (a ij )

is called a matrix of size m × n. We introduce the notation: A i is the i-th row of the matrix, A k is the k-th column of the matrix. The matrix A is also denoted (a ij ).

The following row transformations of a matrix A are called elementary:
α) deletion of a zero row; β) multiplication of all elements of a row by a number λ ≠ 0; γ) addition to any row of any other row multiplied by λ ∈ C. Similar transformations of the columns of A are called elementary column transformations of the matrix A.

The first non-zero element (counting from left to right) of a row of the matrix A is called the leading element of that row.

Definition. A matrix is called an echelon (step) matrix if the following conditions are met:

1) the zero rows of the matrix (if any) are located below the non-zero ones;

2) if the leading elements of the non-zero rows stand in columns k 1 , k 2 , …, k r , then k 1 < k 2 < … < k r .

Any non-zero matrix A can be reduced to an echelon matrix by elementary row transformations.
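A minimal Python sketch of this reduction, using only the three elementary row transformations (the input matrix is an invented example, since the original example's entries were lost):

```python
from fractions import Fraction

def row_echelon(rows):
    """Reduce a matrix to echelon form by elementary row transformations:
    row swaps and adding a multiple of one row to another."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]               # swap rows
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]  # row_i -= f*row_r
        r += 1
        if r == len(m):
            break
    return m

# Illustrative matrix (hypothetical):
E = row_echelon([[1, 2, 3], [2, 4, 7], [3, 6, 10]])
for row in E:
    print(row)
```

The leading elements move strictly to the right from row to row, and the zero row ends up at the bottom, as the definition requires.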

Example. Let's present the matrix
to the step matrix:
~
~
.

The matrix A composed of the coefficients of the system of linear equations (2) is called the main matrix of the system. The matrix A*, obtained by adjoining the column of free terms, is called the extended matrix of the system.

An ordered set is called a solution to a system of linear equations (2) if it is a solution to every linear equation of this system.

A system of linear equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

A system of linear equations is called definite if it has a single solution, and indefinite if it has more than one solution.

The following transformations of a system of linear equations are called elementary:

α) removal from the system of an equation of the form 0·x 1 + … + 0·x n = 0;

β) multiplication of both sides of any equation by λ ∈ C, λ ≠ 0;

γ) addition to any equation of any other equation multiplied by λ ∈ C.

Two systems of linear equations in n unknowns are called equivalent if both are inconsistent or their solution sets coincide.

Theorem. If one system of linear equations is obtained from another by elementary transformations of types α)–γ), then it is equivalent to the original one.

Solving a system of linear equations by eliminating unknowns (Gauss method).

Let a system of m linear equations in n unknowns be given:

If system (1) contains an equation of the form

0·x 1 + … + 0·x n = b, b ≠ 0, (2)

then the system is inconsistent.

Let us assume that system (1) does not contain an equation of the form (2). Suppose that in system (1) the coefficient a 11 of the variable x 1 in the first equation is non-zero (if this is not so, then by rearranging the equations we can achieve it, since not all coefficients of x 1 are equal to zero). Let us apply the following chain of elementary transformations to the system of linear equations (1):


the first equation, multiplied by –a 21 /a 11 , is added to the second equation;

the first equation, multiplied by –a 31 /a 11 , is added to the third equation, and so on;

the first equation, multiplied by –a m1 /a 11 , is added to the last equation of the system.

As a result, we obtain a system of linear equations (in what follows we use the abbreviation SLAE for a system of linear equations) equivalent to system (1). It may turn out that in the resulting system no equation with number i, i ≥ 2, contains the unknown x 2 . Let k be the least natural number such that the unknown x k occurs in at least one equation with number i, i ≥ 2. Then the resulting system of equations has the form:

System (3) is equivalent to system (1). Let us now apply to the subsystem of SLAE (3) formed by its equations with numbers 2, …, m the same reasoning that was applied to SLAE (1), and so on. This process leads to one of two outcomes.

1. We obtain an SLAE containing an equation of the form (2). In this case SLAE (1) is inconsistent.

2. The elementary transformations applied to SLAE (1) do not lead to a system containing an equation of the form (2). In this case SLAE (1) is reduced by elementary transformations to a system of equations of the form:

(4)

where 1 < k < l < … < s,

The system of linear equations of the form (4) is called stepwise. The following two cases are possible here.

A) r = n; then system (4) has the form

(5)

System (5) has a unique solution. Consequently, system (1) also has a unique solution.

B) r < n. In this case the unknowns x 1 , x k , x l , …, x s in system (4) are called the main unknowns, and the remaining unknowns in this system are called free (their number equals n – r). Let us assign arbitrary numerical values to the free unknowns; then SLAE (4) takes the same form as system (5), from which the main unknowns are determined uniquely. Thus the system has a solution, that is, it is consistent. Since the free unknowns were assigned arbitrary numerical values from C, system (4) is indefinite; consequently, system (1) is also indefinite. Expressing the main unknowns of SLAE (4) in terms of the free unknowns, we obtain what is called the general solution of system (1).
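A minimal Python sketch of this idea on an invented echelon system with one free unknown (the system and its general solution below are hypothetical, chosen only to illustrate the mechanism):

```python
from fractions import Fraction

# Hypothetical echelon system with main unknowns x1, x2 and free unknown x3:
#   x1 + x2 + x3 = 3
#        x2 - x3 = 1
# Back-substitution gives the general solution:
#   x2 = 1 + x3,   x1 = 3 - x2 - x3 = 2 - 2*x3.
def solution(x3):
    """Return (x1, x2, x3) for an arbitrary value of the free unknown x3."""
    x3 = Fraction(x3)
    x2 = 1 + x3
    x1 = 2 - 2 * x3
    return (x1, x2, x3)

# Any value of the free unknown produces a valid solution -> the system
# is consistent and indefinite (infinitely many solutions).
for t in (0, 1, Fraction(1, 2)):
    x1, x2, x3 = solution(t)
    assert x1 + x2 + x3 == 3 and x2 - x3 == 1

print(solution(0), solution(1))
```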

Example. Solve a system of linear equations using the Gauss method.

Let us write out the extended matrix of the system of linear equations and, by elementary row transformations, reduce it to an echelon matrix:

~

~
~
~

~ . From the resulting matrix we restore the system of linear equations:
This system is equivalent to the original one. We take the unknowns corresponding to the leading elements as the main unknowns and the remaining unknown as a free unknown. Let us express the main unknowns in terms of the free unknowns:

We have obtained the general solution of the SLAE. Setting the free unknowns to particular values, we then obtain

(5, 0, -5, 0, 1), a particular solution of the SLAE.

Problems to solve independently

1. Find a general solution and one particular solution to the system of equations by eliminating unknowns:

1)
2)

4)
6)

2. For different values of the parameter a, find the general solution of the system of equations:

1)
2)

3)
4)

5)
6)

§8. Vector spaces

The concept of vector space. The simplest properties.

Let V ≠ Ø and let (F, +, ∙) be a field. We will call the elements of the field scalars.

A mapping φ : F × V –> V is called the operation of multiplying elements of the set V by scalars from the field F. We denote φ(λ, a) by λa, the product of the element a by the scalar λ.

Definition. A set V with a given algebraic operation of addition of elements of V and an operation of multiplication of elements of V by scalars from the field F is called a vector space over the field F if the following axioms hold:

Example. Let F be a field, F n = {(a 1 , a 2 , …, a n ) | a i ∈ F, i = 1, …, n}. Each element of the set F n is called an n-dimensional arithmetic vector. We introduce addition of n-dimensional vectors and multiplication of an n-dimensional vector by a scalar from F. Let a = (a 1 , …, a n ), b = (b 1 , …, b n ). Put a + b = (a 1 + b 1 , …, a n + b n ), λa = (λa 1 , λa 2 , …, λa n ). The set F n with these operations is a vector space; it is called the n-dimensional arithmetic vector space over the field F.
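These componentwise operations can be sketched directly in Python (the sample vectors are arbitrary illustrations):

```python
def vec_add(a, b):
    """Componentwise sum of two n-dimensional arithmetic vectors."""
    assert len(a) == len(b), "vectors must have the same dimension"
    return tuple(x + y for x, y in zip(a, b))

def vec_scale(lam, a):
    """Multiplication of an n-dimensional vector by a scalar lam."""
    return tuple(lam * x for x in a)

a, b = (1, 2, 3), (4, 5, 6)
print(vec_add(a, b))       # (5, 7, 9)
print(vec_scale(2, a))     # (2, 4, 6)
```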

Let V be a vector space over the field F, λ ∈ F, a ∈ V. The following properties hold:

1)
;

3)
;

4)
;

Proof of property 3.

From this equality, by the cancellation law in the group (V, +), we obtain the required property.

Linear dependence, independence of vector systems.

Let V be a vector space over the field F and a 1 , …, a m ∈ V. A vector of the form λ 1 a 1 + … + λ m a m , λ i ∈ F, is called a linear combination of the system of vectors a 1 , …, a m . The set of all linear combinations of a system of vectors is called the linear span of this system of vectors and is denoted by ⟨a 1 , …, a m ⟩.

Definition. A system of vectors is called linearly dependent if there exist scalars λ 1 , …, λ m , not all equal to zero, such that

λ 1 a 1 + λ 2 a 2 + … + λ m a m = 0. (1)

If equality (1) holds if and only if λ 1 = λ 2 = … = λ m = 0, then the system of vectors is called linearly independent.

Example. Determine whether the system of vectors a 1 = (1, -2, 2), a 2 = (2, 0, 1), a 3 = (-1, 3, 4) of the space R 3 is linearly dependent or independent.

Solution. Let λ 1 , λ 2 , λ 3 ∈ R and λ 1 a 1 + λ 2 a 2 + λ 3 a 3 = 0. Writing this equality componentwise gives a homogeneous system of linear equations whose only solution is (0, 0, 0). Therefore the system of vectors is linearly independent.

Properties of linear dependence and independence of a system of vectors.

1. A system of vectors containing at least one zero vector is linearly dependent.

2. A system of vectors containing a linearly dependent subsystem is linearly dependent.

3. A system of vectors a 1 , …, a m , where a 1 ≠ 0, is linearly dependent if and only if at least one vector of the system, other than a 1 , is a linear combination of the vectors preceding it.

4. If the system of vectors a 1 , …, a m is linearly independent and the system a 1 , …, a m , b is linearly dependent, then the vector b can be represented as a linear combination of a 1 , …, a m , and moreover in a unique way.

Proof. Since the system a 1 , …, a m , b is linearly dependent, there exist scalars λ 1 , …, λ m , λ m+1 , not all equal to zero, such that

λ 1 a 1 + … + λ m a m + λ m+1 b = 0. (2)

In the vector equality (2), λ m+1 ≠ 0. Indeed, if λ m+1 = 0, then (2) would imply that the system a 1 , …, a m is linearly dependent, since λ 1 , λ 2 , …, λ m are not all zero; this contradicts the hypothesis. Hence from (2), b = α 1 a 1 + … + α m a m , where α i = –λ i /λ m+1 .

Suppose the vector b is also represented in the form b = β 1 a 1 + … + β m a m . Then, subtracting the two representations, the linear independence of the system a 1 , …, a m implies that

α 1 = β 1 , …, α m = β m .

5. Let two systems of vectors a 1 , …, a m and b 1 , …, b k be given, with m > k. If every vector of the first system can be represented as a linear combination of the system b 1 , …, b k , then the system a 1 , …, a m is linearly dependent.

Basis, rank of the vector system.

Denote by S a finite system of vectors of the space V over the field F.

Definition. A linearly independent subsystem of a system of vectors S is called a basis of S if every vector of the system S can be represented as a linear combination of the vectors of this subsystem.

Example. Find a basis of the system of vectors e 1 = (1, 0, 0), e 2 = (0, 1, 0), a = (-2, 3, 0) in R 3 . The system e 1 , e 2 is linearly independent, and a = -2e 1 + 3e 2 , so e 1 , e 2 is a basis of the given system of vectors.


  • Let a 1 , …, a m be a system of vectors from V. The basic elementary transformations of a system of vectors are:

    1. adding to one of the vectors a linear combination of the others;

    2. multiplying one of the vectors by a non-zero number;

    3. interchanging two vectors.

    Systems of vectors are called equivalent (notation: ~) if there is a chain of elementary transformations that transforms the first system into the second.

    Let us note the properties of the introduced notion of equivalence of vector systems:

    every system is equivalent to itself (reflexivity);

    if the first system is equivalent to the second, then the second is equivalent to the first (symmetry);

    if the first system is equivalent to the second and the second to the third, then the first is equivalent to the third (transitivity).

    Theorem. If a system of vectors is linearly independent and another system is equivalent to it, then that system is also linearly independent.

    Proof. Obviously, it suffices to prove the theorem for a system obtained from the original one by a single elementary transformation. Rearranging two vectors or multiplying one of the vectors by a non-zero number clearly does not affect the linear independence of the system. Suppose now that the new system is obtained from the original one by adding to one of its vectors a linear combination of the others. Consider a linear combination of the new system equal to the zero vector (1); substituting the expression for the modified vector, we obtain a linear combination of the original system equal to the zero vector. (2)

    Since the original system is linearly independent, it follows from (2) that all its coefficients are zero; hence all coefficients in (1) are zero as well, and the new system is linearly independent. Q.E.D.

    57. Matrices: addition of matrices, multiplication of a matrix by a scalar; matrices as a vector space, its dimension.

    Matrix type: square

    Matrix addition



    Properties of matrix addition:

    1.commutativity: A+B = B+A;

    Multiplying a matrix by a number

    Multiplying the matrix A by a number λ (notation: λA) consists in constructing a matrix B whose elements are obtained by multiplying each element of A by this number, that is, each element of B equals b ij = λa ij .

    Properties of multiplying matrices by a number:

    2. (λβ)A = λ(βA)

    3. (λ+β)A = λA + βA

    4. λ(A+B) = λA + λB

    Row vector and column vector

    Matrices of size m x 1 and 1 x n are elements of the spaces K^m and K^n, respectively:

    a matrix of size m x1 is called a column vector and has a special notation:

    A matrix of size 1 x n is called a row vector and has a special notation:

    58. Matrices. Addition and multiplication of matrices. Matrices as a ring, properties of the matrix ring.

    A matrix is a rectangular table of numbers consisting of m rows of equal length and n columns of equal length.

    aij is a matrix element that is located in the i-th row and j-th column.

    Matrix type: square

    A square matrix is ​​a matrix with an equal number of columns and rows.

    Matrix addition

    The addition A + B of matrices is the operation of finding a matrix C all of whose elements are the pairwise sums of the corresponding elements of A and B, that is, c ij = a ij + b ij .

    Properties of matrix addition:

    1.commutativity: A+B = B+A;

    2.associativity: (A+B)+C =A+(B+C);

    3.addition with zero matrix: A + Θ = A;

    4.existence of the opposite matrix: A + (-A) = Θ;

    All properties of linear operations repeat the axioms of linear space, and therefore the following theorem is valid:

    The set of all matrices of the same size m x n with elements from the field P (the field of all real or complex numbers) forms a linear space over the field P (each such matrix is a vector of this space).

    Matrix multiplication

    Matrix multiplication (designation: AB, less often with the multiplication sign A x B) is the operation of calculating matrix C, each element of which is equal to the sum of the products of elements in the corresponding row of the first factor and column of the second.

    The number of columns in matrix A must match the number of rows in matrix B, in other words, matrix A must be consistent with matrix B. If matrix A has dimensions m x n, B - n x k, then the dimension of their product AB=C is m x k.
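The row-by-column rule and the dimension condition can be sketched in a few lines of Python (the matrices below are invented examples):

```python
def mat_mul(A, B):
    """C = AB: c_ij is the sum of products of the i-th row of A with the
    j-th column of B; A (m x n) must be consistent with B (n x k)."""
    assert len(A[0]) == len(B), "A's column count must match B's row count"
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]           # 2 x 2
B = [[5, 6, 7], [8, 9, 10]]    # 2 x 3
print(mat_mul(A, B))           # 2 x 3 product
```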

    Properties of matrix multiplication:

    1.associativity (AB)C = A(BC);

    2.non-commutativity (in the general case): AB ≠ BA;

    3. the product is commutative in the case of multiplication with the identity matrix: AI = IA;

    4.distributivity: (A+B)C = AC + BC, A(B+C) = AB + AC;

    5.associativity and commutativity with respect to multiplication by a number: (λA)B = λ(AB) = A(λB);

    59. Invertible matrices. Singular and non-singular matrices; elementary transformations of matrix rows. Elementary matrices. Multiplication by elementary matrices.

    An inverse matrix is a matrix A−1 such that multiplying it by the original matrix A yields the identity matrix E:

    The following are called elementary row transformations:

    Elementary column transformations are defined similarly.

    Elementary transformations are reversible.

    The notation A ~ B indicates that the matrix B can be obtained from A by elementary transformations (or vice versa).

    Two systems of linear equations in one set x 1 ,..., x n of unknowns, consisting respectively of m and p equations,

    are called equivalent if their solution sets coincide (that is, the corresponding subsets of K n coincide). This means that either both are empty (i.e., both systems (I) and (II) are inconsistent), or both are non-empty and every solution of system I is a solution of system II, and every solution of system II is a solution of system I.

    Example 3.2.1.

    Gauss method

    The plan for the algorithm proposed by Gauss was quite simple:

    1. apply to the system of linear equations successive transformations that do not change the set of solutions (thus preserving the solution set of the original system), passing to an equivalent system of a "simple form" (the so-called echelon form);
    2. for the system of "simple form" (with an echelon matrix), describe the set of solutions, which coincides with the set of solutions of the original system.

    Note that a similar method, "fangcheng," was already known in ancient Chinese mathematics.

    Elementary transformations of systems of linear equations (rows of matrices)

    Definition 3.4.1 (elementary transformation of type 1). The k-th equation of the system, multiplied by a number c, is added to the i-th equation (notation: (i)' = (i) + c(k); i.e., only the i-th equation (i) is replaced by the new equation (i)' = (i) + c(k)). The new i-th equation has the form (a i1 + ca k1 )x 1 + ... + (a in + ca kn )x n = b i + cb k , or, briefly,

    that is, in the new i-th equation a ij ' = a ij + ca kj , b i ' = b i + cb k .

    Definition 3.4.2 (elementary transformation of type 2). The i-th and k-th equations are swapped, and the remaining equations do not change (notation: (i)' = (k), (k)' = (i); for the coefficients this means the following: for j = 1,...,n

    Note 3.4.3. For convenience, in specific calculations one can use an elementary transformation of type 3: the i-th equation is multiplied by a non-zero number c, (i)' = c(i).

    Proposition 3.4.4. If we moved from system I to system II using a finite number of elementary transformations of the 1st and 2nd types, then from system II we can return to system I also using elementary transformations of the 1st and 2nd types.

    Proof.

    Note 3.4.5. The statement remains true if elementary transformations of type 3 are included: if (i)' = c(i), c ≠ 0, then (i) = c −1 (i)'.

    Theorem 3.4.6. After applying a finite number of elementary transformations of types 1 or 2 to a system of linear equations, one obtains a system of linear equations equivalent to the original one.

    Proof. Note that it suffices to consider the transition from system I to system II by a single elementary transformation and to prove the inclusion of the solution sets (since, by the proposition proved above, we can return from system II to system I, which gives the reverse inclusion, and thus equality is proved).

    Elementary matrix transformations include:

    1. Changing the order of rows (columns).

    2. Discarding zero rows (columns).

    3. Multiplying the elements of any row (column) by the same non-zero number.

    4. Adding to the elements of any row (column) the elements of another row (column), multiplied by the same number.

    Systems of linear algebraic equations (Basic concepts and definitions).

    1. A system of m linear equations in n unknowns is a system of equations of the form:

    2. A solution of the system of equations (1) is a collection of numbers x 1 , x 2 , …, x n that turns each equation of the system into an identity.

    3. The system of equations (1) is called consistent if it has at least one solution; if the system has no solutions, it is called inconsistent.

    4. The system of equations (1) is called definite if it has exactly one solution, and indefinite if it has more than one solution.

    5. As a result of elementary transformations, system (1) is transformed to a system equivalent to it (i.e., having the same set of solutions).

    The elementary transformations of systems of linear equations include:

    1. Discarding null rows.

    2. Changing the order of lines.

    3. Adding to the elements of any row the elements of another row, multiplied by the same number.

    Methods for solving systems of linear equations.

    1) Inverse matrix method (matrix method) for solving systems of n linear equations with n unknowns.

    A system of n linear equations in n unknowns is a system of equations of the form:

    Let us write system (2) in matrix form; for this we introduce notation.

    Coefficient matrix for variables:

    X = is a matrix of variables.

    B = is a matrix of free terms.

    Then system (2) will take the form:

    A × X = B is the matrix equation.

    Solving the equation, we get:

    X = A -1 × B
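The formula X = A⁻¹ × B can be sketched in Python for the 2×2 case, where the inverse is easy to write via the adjugate; the system below is an invented example, not the one from the text:

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate: A^-1 = (1/det) * adj(A)."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix must be non-singular"
    k = Fraction(1, 1) / det
    return [[k * d, -k * b], [-k * c, k * a]]

def mat_vec(A, x):
    """Multiply a matrix by a column of values."""
    return [sum(r * v for r, v in zip(row, x)) for row in A]

# Hypothetical system: 2x + y = 5, x + 3y = 10
A, B = [[2, 1], [1, 3]], [5, 10]
X = mat_vec(inv2(A), B)    # X = A^-1 * B
print(X)                   # the solution column of the system
```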

    Example:

    ; ;

    1) │A│ = 15 + 8 − 18 − 9 − 12 + 20 = 4 ≠ 0, so the matrix A −1 exists.

    3)

    Ã =

    4) A −1 = (1/│A│) × Ã = ;

    X = A -1 × B

    Answer:

    2) Cramer's rule for solving systems of n linear equations in n unknowns.

    Consider a system of 2 linear equations in 2 unknowns:

    Let's solve this system using the substitution method:

    From the first equation it follows:

    Substituting into the second equation, we get:

    Substituting the value into the formula for, we get:

    The determinant Δ is the determinant of the system matrix;

    Δ x 1 - determinant of the variable x 1 ;

    Δ x 2 - determinant of the variable x 2 ;

    Formulas:

    x 1 = Δ x 1 /Δ; x 2 = Δ x 2 /Δ; …; x n = Δ x n /Δ; Δ ≠ 0

    are called Cramer's formulas.

    When finding the determinants for the unknowns x 1 , x 2 ,…, x n , the column of coefficients of the variable whose determinant is being found is replaced by the column of free terms.
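A minimal Python sketch of Cramer's rule for the 2×2 case; the system used below is hypothetical, chosen only to show the column replacement:

```python
def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer2(A, b):
    """Cramer's rule for a 2x2 system: x_i = D_i / D, where D_i is the
    determinant of A with the i-th column replaced by the free terms."""
    D = det2(A)
    assert D != 0, "Cramer's rule requires a non-zero system determinant"
    D1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]])   # replace column 1
    D2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]])   # replace column 2
    return (D1 / D, D2 / D)

# Hypothetical system: x + 2y = 5, 3x + 4y = 11
print(cramer2([[1, 2], [3, 4]], [5, 11]))   # (x, y) = (1.0, 2.0)
```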

    Example: Solve a system of equations using Cramer's method

    Solution:

    Let us first compose and calculate the main determinant of this system:

    Since Δ ≠ 0, the system has a unique solution, which can be found using Cramer’s rule:

    where Δ 1, Δ 2, Δ 3 are obtained from the determinant of Δ by replacing the 1st, 2nd or 3rd column, respectively, with the column of free terms.

    Thus:

    Gauss method for solving systems of linear equations.

    Consider the system:

    The extended matrix of system (1) is a matrix of the form:

    The Gauss method is the method of successively eliminating unknowns from the equations of the system, from the second equation through the m-th equation.

    In this case, by means of elementary transformations the matrix of the system is reduced to triangular form (if m = n and the determinant of the system ≠ 0) or to echelon form (if m < n).

    Then, starting from the last equation, all the unknowns are found in turn.

    Gauss method algorithm:

    1) Create an extended matrix of the system, including a column of free terms.

    2) If a 11 ≠ 0, divide the first row by a 11 ; then multiply it by (−a 21 ) and add it to the second row. Proceed similarly down to the m-th row:

    divide row 1 by a 11 , multiply it by (−a m1 ), and add it to the m-th row.

    As a result, the variable x 1 is eliminated from all equations from the second through the m-th.

    3) At the 3rd step, the second row is used for similar elementary transformations of rows 3 through m. This eliminates the variable x 2 from rows 3 through m, and so on.

    As a result of these transformations, the system is reduced to triangular or echelon form (in the triangular case there are zeros below the main diagonal).

    Reducing the system to triangular or echelon form is called the forward pass of the Gauss method, and finding the unknowns from the resulting system is called the backward pass.
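Both passes can be sketched together in Python; the 3×3 system below is an invented example with a unique solution:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Forward pass: reduce the extended matrix [A|b] to triangular form;
    backward pass: back-substitute from the last equation upward.
    Assumes a square system with a unique solution."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):                       # forward elimination
        piv = next(i for i in range(col, n) if m[i][col] != 0)
        m[col], m[piv] = m[piv], m[col]        # bring a pivot into place
        for i in range(col + 1, n):
            f = m[i][col] / m[col][col]
            m[i] = [a - f * c for a, c in zip(m[i], m[col])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):             # back substitution
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

# Hypothetical system:
#   x + y + z = 6,  x + 2y + 3z = 14,  x + 4y + 9z = 36
x = gauss_solve([[1, 1, 1], [1, 2, 3], [1, 4, 9]], [6, 14, 36])
print(x)   # [1, 2, 3]
```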

    Example:

    Forward pass. We reduce the extended matrix of the system

    to echelon form by elementary transformations. Interchanging the first and second rows of the matrix (A|b), we obtain the matrix:

    To the second row of the resulting matrix we add the first row multiplied by (−2), and to the third row the first row multiplied by (−7). We obtain the matrix

    To the third row of the resulting matrix we add the second row multiplied by (−3), obtaining an echelon matrix

    Thus, we have reduced the given system of equations to echelon form:

    Backward pass. Starting from the last equation of the resulting echelon system, we successively find the values of the unknowns:
