# Bases of a (subspace of a) vector space


Firstly, regarding the basis vectors of a subspace of a vector space: e.g. the null space of a 3x5 matrix A is a subspace of R^5. Are the basis vectors of such a set (in general, the solution set of a linear system) always the vectors associated with the free parameters in its general vector parametric solution, i.e. the direction vectors of the flat representing the solutions?
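The first question can be sanity-checked with a small sketch in Python. The 3x5 matrix A below is a hypothetical example (not one from the thread), chosen already in reduced form so the free parameters can be read off directly:

```python
def matvec(A, v):
    """Multiply matrix A (a list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# 3x5 matrix, already in reduced form: pivots in columns 1, 2 and 4,
# so x3 = s and x5 = t are free parameters.
A = [[1, 0, 2, 0, 3],
     [0, 1, 1, 0, 1],
     [0, 0, 0, 1, 2]]

# General solution of Ax = 0: x = s*v1 + t*v2, with v1 and v2 read off
# from the free parameters.
v1 = [-2, -1, 1, 0, 0]   # vector attached to s (note the 1 in slot x3)
v2 = [-3, -1, 0, -2, 1]  # vector attached to t (note the 1 in slot x5)

# Both lie in the null space N(A) ...
assert matvec(A, v1) == [0, 0, 0]
assert matvec(A, v2) == [0, 0, 0]

# ... and they are linearly independent: each has a 1 in "its" free
# coordinate where the other has a 0, so s*v1 + t*v2 = 0 forces s = t = 0.
# Hence {v1, v2} is a basis of N(A), which is 2-dimensional.
```

The pattern that each free-parameter vector carries a 1 in its own free coordinate and 0 in the others is what guarantees linear independence in general, not just in this example.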

EDIT: Terminology used. RS(A) refers to the row space of the matrix A.

Lin(B3) refers to the set of all linear combinations of the vectors in the set B3 (i.e. its span).

RRE(A) refers to the reduced row echelon form of the matrix A.

Secondly, can someone help explain this to me please? I'm not sure about the reasoning for this basis spanning RS(A).

So since elementary row operations are linear combinations of rows, every row in RRE(A) is a linear combination of the original rows of the matrix A (and so linear combinations of the linear combinations of the rows of A, right?). This is fine, but why does that make RS(RRE(A)) a subset of RS(A)? I thought it'd make RS(A) a subset of RS(RRE(A)).

Is it because the rows in RRE(A) are linear combinations of the rows of A, so the rows of RRE(A) lie in RS(A), and hence RS(RRE(A)) is a subset of RS(A), since any linear combination of the rows of RRE(A) stays within RS(A)?

Lastly, it says by definition Lin(B3) = RS(RRE(A)). Is this because the row space of RRE(A) is the set of linear combinations of all rows of RRE(A), and B3 is the set of rows of RRE(A) which have leading 1s, so any other row in RRE(A) is a zero row (hence a trivial linear combination of the rows in B3), and therefore RS(RRE(A)) = Lin(B3)?
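The zero-row point can be illustrated with a toy sketch (the matrix R below is a hypothetical RRE(A), not one from the thread): dropping the zero rows never changes the span.

```python
# RRE(A) for some hypothetical A: two rows with leading 1s and one zero row.
R = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 0]]
B3 = R[:2]  # the rows with leading 1s

def lin_comb(coeffs, rows):
    """Form the linear combination sum(c * row) over coefficient/row pairs."""
    out = [0] * len(rows[0])
    for c, row in zip(coeffs, rows):
        out = [o + c * x for o, x in zip(out, row)]
    return out

# A combination that uses all three rows of RRE(A) ...
w = lin_comb([5, -1, 42], R)
# ... equals the combination using only B3: the zero row adds nothing,
# whatever its coefficient, so Lin(B3) = RS(RRE(A)).
assert w == lin_comb([5, -1], B3)
```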


Last edited by Chittesh14; 2 years ago


#4

(Original post by **Chittesh14**) Tagging people who might be able to help me.

RDKGames ghostwalker DFranklin


#5

(Original post by **Chittesh14**) Firstly, regarding the basis vectors of a subspace of a vector space: e.g. the null space of a 3x5 matrix A is a subspace of R^5. Are the basis vectors of such a set (in general, the solution set of a linear system) always the vectors associated with the free parameters in its general vector parametric solution, i.e. the direction vectors of the flat representing the solutions?

The solution set of a linear system (of the form A**x** = **b**) is not a vector space unless **b** = **0**, so talking about the basis of this set doesn't really make sense.

On the other hand, if you have two solutions A**x** = **b** and A**y** = **b**, then A(**x** - **y**) = **0**, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to A**v** = **0**). Which is where you get the connection you're referring to.
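The x - y observation can be checked numerically; the A, b, x and y below are made-up values for illustration, not from the thread:

```python
def matvec(A, v):
    """Multiply matrix A (a list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 1, 1],
     [0, 1, 2]]
b = [3, 3]

x = [1, 1, 1]   # one solution:      A x = b
y = [0, 3, 0]   # another solution:  A y = b
assert matvec(A, x) == b and matvec(A, y) == b

# Their difference solves the homogeneous system, i.e. lies in N(A):
d = [xi - yi for xi, yi in zip(x, y)]
assert matvec(A, d) == [0, 0]
```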


(Original post by **DFranklin**) What does RS(A) mean? I've never seen this terminology. (I can guess what RRE means.)


#7

(Original post by **Chittesh14**) So since elementary row operations are linear combinations of rows, every row in RRE(A) is a linear combination of the original rows of the matrix A (and so linear combinations of the linear combinations of the rows of A, right?). This is fine, but why does that make RS(RRE(A)) a subset of RS(A)? I thought it'd make RS(A) a subset of RS(RRE(A)).

A linear combination of linear combinations is still a linear combination (at the end of the day, if you group everything by the row vectors **r**1, ..., **r**m, you've always got something of the form λ1**r**1 + ... + λm**r**m, so a linear combination).

The real thing to "worry" about is actually whether RS(RRE(A)) is strictly smaller than RS(A). (For example, we *could* reduce A to the zero-matrix by row operations, if we simply multiplied every row by 0, and then obviously RS(0) is smaller than RS(A)). But it turns out that this can't happen with elementary row operations, because every such operation is invertible.
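The invertibility point can be sketched concretely (the elementary matrix E below is one assumed example, "add 2 × row 1 to row 2"): since A = E_inv (E A), the rows of A are themselves linear combinations of the rows of E A, so RS(A) ⊆ RS(EA) as well as RS(EA) ⊆ RS(A).

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E     = [[1, 0], [2, 1]]    # "add 2 * row 1 to row 2"
E_inv = [[1, 0], [-2, 1]]   # its inverse: "subtract 2 * row 1 from row 2"
assert matmul(E, E_inv) == [[1, 0], [0, 1]]   # E really is invertible

A  = [[1, 2, 3], [4, 5, 6]]
EA = matmul(E, A)            # rows of EA are combinations of rows of A ...
assert matmul(E_inv, EA) == A  # ... and rows of A are combinations of rows of EA
```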

(Original post by **Chittesh14**) Is it because the rows in RRE(A) are linear combinations of the rows of A, so the rows of RRE(A) lie in RS(A), and so RS(RRE(A)) is a subset of RS(A), as any linear combination of the rows of RRE(A) stays within RS(A)?

If that works for you as an explanation, then yes. (There are a lot of almost identical ways of explaining this, so provided you're confident about one of them, and that you could give more detail if needed, then that's fine.)

(Original post by **Chittesh14**) Lastly, it says by definition Lin(B3) = RS(RRE(A)). Is this because the row space of RRE(A) is the set of linear combinations of all rows of RRE(A), and B3 is the set of rows of RRE(A) which have leading 1s, so any other row in RRE(A) is a zero row and contributes nothing, hence Lin(B3) = RS(RRE(A))?

I'm not sure it's possible to answer this without **exact** definitions of the terms. (I don't think it would be "by definition" by the terms I used when I first did this, but it was over 30 years ago, so I don't recall exactly...)


(Original post by **DFranklin**) The solution set of a linear system (of the form A**x** = **b**) is not a vector space unless **b** = **0**, so talking about the basis of this set doesn't really make sense. On the other hand, if you have two solutions A**x** = **b** and A**y** = **b**, then A(**x** - **y**) = **0**, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to A**v** = **0**). Which is where you get the connection you're referring to.

Let's say I have a 3x5 matrix A, i.e. 3 rows and 5 columns, so each column of A belongs to R^3. The column space, row space and null space of A are subspaces of the vector spaces R^3, R^5 and R^5 respectively. N(A), the null space of the matrix A, refers to the set of solutions to the linear system Ax = 0; hence it is a flat going through the origin, and its solutions can be represented in vector parametric form. Now, if that general solution has free parameters, i.e. A doesn't have full column rank, then:

**are the direction vectors of the system (i.e. the vectors associated with free parameters in the general vector parametric solution) always the basis vectors of the set representing N(A), i.e. the null space of the matrix A?**

So, what I mean: if the set in question is the set of solutions to a linear system, and its general solution can be represented in vector parametric form, are its basis vectors always the vectors associated with the free parameters in that general solution (i.e. the direction vectors of the flat)?

This is just in general and not necessarily for any special subspaces like column space of A, row space of A etc.

For example, suppose the linear system Ax = b has the vector parametric solution (x y z)T = (1 1 2)T + s(1 3 1)T + t(2 5 7)T, where s, t are real numbers and each triple is a column vector, i.e. (1 1 2) transposed.

Then, would the basis vectors for this subspace be the direction vectors, i.e. {(1 3 1)T, (2 5 7)T}?
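Since A and b themselves aren't given in this example, only the direction vectors can be checked; a quick sketch confirms they are linearly independent (and, per the discussion above, they would be a basis of N(A) — the solution set of Ax = b itself is not a subspace when b is nonzero):

```python
def cross(u, v):
    """Cross product of two vectors in R^3."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

u = [1, 3, 1]   # first direction vector
v = [2, 5, 7]   # second direction vector

# Two vectors in R^3 are linearly dependent exactly when they are
# parallel, i.e. when their cross product is the zero vector.
assert cross(u, v) == [16, -5, -1]   # nonzero, so u and v are independent
```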


#9

(Original post by **Chittesh14**) Sorry, it's very hard to explain. What I meant is that: let's say I have a 3x5 matrix A, i.e. 3 rows and 5 columns, so each column of A belongs to R^3. The column space, row space and null space of A are subspaces of R^3, R^5 and R^5 respectively. N(A), the null space of A, is the set of solutions to Ax = 0, so it is a flat going through the origin whose solutions can be represented in vector parametric form. Now, if that solution has free parameters, i.e. A doesn't have full column rank, then **are the direction vectors of the system (i.e. the vectors associated with free parameters in the general vector parametric solution) always the basis vectors of the set representing N(A)?** This is just in general and not necessarily for any special subspaces like the column space or row space of A. For example, if the linear system Ax = b has vector parametric solution (x y z)T = (1 1 2)T + s(1 3 1)T + t(2 5 7)T, where s, t are real numbers, would the basis vectors for this subspace be the direction vectors, i.e. {(1 3 1)T, (2 5 7)T}?

Yes; the second paragraph in the post of mine you're replying to explains why.

Again, what subspace are you talking about? (The only logical one would be N(A), but you're not exactly being explicit.)


(Original post by **DFranklin**) The solution set of a linear system (of the form A**x** = **b**) is not a vector space unless **b** = **0**, so talking about the basis of this set doesn't really make sense. On the other hand, if you have two solutions A**x** = **b** and A**y** = **b**, then A(**x** - **y**) = **0**, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to A**v** = **0**). Which is where you get the connection you're referring to.

So just to clarify: the flat corresponding to the linear system Ax = b is the graph of all the vectors x for which that linear system is satisfied. The direction vectors of this flat in vector parametric form are the vectors in the null space of A (i.e. the vectors associated with free parameters in VPE form), and these allow you to go from one solution to another, e.g. from x to y to z, where Ax = Ay = Az = b.

In a similar way, the basis vectors of the set representing this flat, i.e. {x | Ax = b}, are vectors {v1, v2, ...} that are linearly independent and span the set, so any vector in the set (i.e. any solution to the system Ax = b) can be reached through them. (So they basically correspond to the direction vectors of the system: on a graph they take you to any solution x, and in set terms, linear combinations of them reach any vector x satisfying Ax = b in the set {x | Ax = b}.)

Sorry, I find it hard to write clearly.



(Original post by **DFranklin**) A linear combination of linear combinations is still a linear combination (at the end of the day, if you group everything by the row vectors **r**1, ..., **r**m, you've always got something of the form λ1**r**1 + ... + λm**r**m, so a linear combination).

The real thing to "worry" about is actually whether RS(RRE(A)) is strictly smaller than RS(A). (For example, we *could* reduce A to the zero-matrix by row operations, if we simply multiplied every row by 0, and then obviously RS(0) is smaller than RS(A)). But it turns out that this can't happen with elementary row operations, because every such operation is invertible.

If that works for you as an explanation, then yes. (There are a lot of almost identical ways of explaining this, so provided you're confident about one of them, and that you could give more detail if needed, then that's fine.)

I'm not sure it's possible to answer this without **exact** definitions of the terms. (I don't think it would be "by definition" by the terms I used when I first did this, but it was over 30 years ago, so I don't recall exactly...)

Oh right, why do I have to worry whether RS(RRE(A)) is strictly smaller than RS(A)? Is it because I'm trying to show they're equal? Thank you for the explanation of why it can't be strictly smaller; that was really nice, I didn't think of that.

Thanks, I was just making sure I know the reasoning behind the explanation, else I wouldn't be able to prove it if I had to. Thank you for clarifying.

No problem, I'll probably have to ask the lecturer regarding the notes, as some stuff is hard to understand as it says by definition.


(Original post by **DFranklin**) Yes; the second paragraph in the post of mine you're replying to explains why.

Again, what subspace are you talking about? (The only logical one would be N(A), but you're not exactly being explicit.)

For example, we might have the coefficient matrix A on the left-hand side of the augmented matrix (A|b) representing the linear system Ax = b. So, as we have now confirmed, the solution set of this system is a flat in R^5; in fact it is a 2-dimensional flat in R^5 with the direction vectors as shown, and these would be the basis vectors of the set {x | Ax = b}?

Also, I just meant regarding the matrix A: we have subspaces linked to A, i.e. CS(A), RS(A) and N(A), which all pass through the origin and stand for the column space, row space and null space of A respectively.

[attach]802740[/attach]

Now I am using a different matrix A. Here, the column space of A is shown: it is the set of all vectors a·c1 + b·c2 + ... + e·c5, where c1, ..., c5 are the 1st, ..., 5th columns of A respectively. So it is the set of all linear combinations of the columns of A, and here the column space of A is a 2-dimensional subspace of R^3.

Now, the vector parametric representation of CS(A) would use its direction vectors (but they have to be linearly independent in VPE form, so they will be linearly independent columns of A). Hence, can I say the basis vectors of CS(A) correspond to its direction vectors, i.e. linearly independent columns of the matrix A?
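This last point can be sketched with a small Gauss-Jordan routine (the 3x5 matrix below is a hypothetical example, not the one from the attachment): the pivot columns found by row reduction index linearly independent columns of A, and those columns of the *original* A form a basis of CS(A).

```python
from fractions import Fraction

def rref_pivot_cols(A):
    """Return the pivot-column indices of A via Gauss-Jordan elimination."""
    M = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        if r == rows:
            break
        # find a row at or below r with a nonzero entry in column c
        i = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if i is None:
            continue
        M[r], M[i] = M[i], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]   # scale the pivot to 1
        for j in range(rows):            # clear the rest of column c
            if j != r and M[j][c] != 0:
                factor = M[j][c]
                M[j] = [a - factor * b for a, b in zip(M[j], M[r])]
        pivots.append(c)
        r += 1
    return pivots

# Hypothetical 3x5 matrix: columns 2, 4 and 5 are combinations of columns 1 and 3.
A = [[1, 2, 0, 1, 3],
     [2, 4, 1, 3, 7],
     [1, 2, 1, 2, 4]]
print(rref_pivot_cols(A))  # -> [0, 2]: columns 1 and 3 of A are a basis of CS(A)
```

Note the 2-dimensional result matches the "2-dimensional subspace of R^3" description above, though for a different concrete matrix.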



