# Bases of a (subspace of a) vector space

#1
Firstly, regarding the basis vectors of a subspace of a vector space (e.g. the null space of a 3x5 matrix A is a subspace of R^5): are the basis vectors of such a set (in general, the solution set of a linear system) always the vectors associated with the free parameters in its general vector parametric solution, i.e. the direction vectors of the flat representing the solutions?

EDIT: Terminology used:
RS(A) refers to the row space of the matrix A.
Lin(B3) refers to the set of linear combinations of the vectors in the set B3.
RRE(A) refers to the reduced row echelon form of the matrix A.

Secondly, can someone help explain this to me please - I'm not sure about the reasoning for this basis spanning RS(A).

Since elementary row operations replace rows with linear combinations of rows, every row in RRE(A) is a linear combination of the original rows of the matrix A (and linear combinations of rows of RRE(A) are then linear combinations of linear combinations of the rows of A, right?). This is fine, but why does that make RS(RRE(A)) a subset of RS(A)? I thought it'd make RS(A) a subset of RS(RRE(A)).

Is it because the rows of RRE(A) are linear combinations of the rows of A, so every row of RRE(A) lies in RS(A), and hence RS(RRE(A)) is a subset of RS(A), since any linear combination of the rows of RRE(A) stays inside RS(A)?

Lastly, it says by definition Lin(B3) = RS(RRE(A)). Is this because the row space of RRE(A) is the set of linear combinations of all rows of RRE(A), and B3 is the set of rows of RRE(A) which have leading 1s, so any other row in RRE(A) is a linear combination of the rows in B3 (indeed, any row of RRE(A) not in B3 is a zero row), and therefore RS(RRE(A)) = Lin(B3)?
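As a sanity check of this reasoning, here's a short sympy sketch (the matrix A is made up for illustration, not from the thread): compute RRE(A), take B3 to be its nonzero rows, and confirm that Lin(B3) and RS(A) agree by comparing ranks.

```python
from sympy import Matrix

# A hypothetical 3x5 matrix A (chosen for illustration; its third
# row is the sum of the first two, so its rank is 2).
A = Matrix([
    [1, 2, 0, 1, 3],
    [2, 4, 1, 3, 7],
    [3, 6, 1, 4, 10],
])

R, pivots = A.rref()          # R = RRE(A), pivots = pivot column indices

# B3 = the nonzero rows of RRE(A) (the rows with leading 1s).
B3 = [R.row(i) for i in range(R.rows) if any(R.row(i))]

# Lin(B3) = RS(A) iff stacking the B3 rows onto A does not raise the
# rank, and the number of B3 rows equals rank(A).
stacked = A.col_join(Matrix.vstack(*B3))
print(len(B3), A.rank(), stacked.rank())   # all three agree
```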
Last edited by Chittesh14; 2 years ago
#3
Tagging people who might be able to help me.

RDKGames ghostwalker DFranklin
#4
(Original post by Chittesh14)
Tagging people who might be able to help me.

RDKGames ghostwalker DFranklin
What does RS(A) mean? I've never seen this terminology. (I can guess what RRE means).
#5
(Original post by Chittesh14)
Firstly, regarding the basis vectors of a subspace of a vector space (e.g. the null space of a 3x5 matrix A is a subspace of R^5): are the basis vectors of such a set (in general, the solution set of a linear system) always the vectors associated with the free parameters in its general vector parametric solution, i.e. the direction vectors of the flat representing the solutions?
The solution set of a linear system (of the form Ax = b) is not a vector space unless b = 0, so talking about the basis of this set doesn't really make sense.

On the other hand, if you have two solutions Ax = b and Ay = b, then A(x-y) = 0, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to Av = 0). Which is where you get the connection you're referring to.
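This connection is easy to check numerically with a made-up example (the matrix A, vector b and solutions below are purely illustrative): sympy computes a basis of N(A) directly, one basis vector per free parameter, and the difference of any two solutions of Ax = b lands in N(A).

```python
from sympy import Matrix

# Hypothetical 3x5 system Ax = b (not from the thread); A has rank 3,
# so there are 5 - 3 = 2 free parameters.
A = Matrix([
    [1, 0, 2, 0, 1],
    [0, 1, 1, 0, 2],
    [0, 0, 0, 1, 3],
])
b = Matrix([1, 2, 3])

# A basis of the null space: one vector per free parameter.
null_basis = A.nullspace()
print(len(null_basis))          # 2 free parameters

# Two particular solutions of Ax = b: y differs from x by an
# element of N(A), so A maps both to b.
x = Matrix([1, 2, 0, 3, 0])
y = x + 2 * null_basis[0] - null_basis[1]

# A kills their difference, i.e. x - y lies in N(A).
print(A * x == b, A * y == b, A * (x - y) == Matrix([0, 0, 0]))
```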
#6
(Original post by DFranklin)
What does RS(A) mean? I've never seen this terminology. (I can guess what RRE means).
Sorry, I've edited it into the first post. Very silly of me.
#7
(Original post by Chittesh14)
Since elementary row operations replace rows with linear combinations of rows, every row in RRE(A) is a linear combination of the original rows of the matrix A (and linear combinations of rows of RRE(A) are then linear combinations of linear combinations of the rows of A, right?). This is fine, but why does that make RS(RRE(A)) a subset of RS(A)? I thought it'd make RS(A) a subset of RS(RRE(A)).
A linear combination of linear combinations is still a linear combination (at the end of the day, if you group everything by the row vectors r_1, ..., r_m, you've always got something of the form c_1 r_1 + c_2 r_2 + ... + c_m r_m, so a linear combination).

The real thing to "worry" about is actually whether RS(RRE(A)) is strictly smaller than RS(A). (For example, we *could* reduce A to the zero-matrix by row operations, if we simply multiplied every row by 0, and then obviously RS(0) is smaller than RS(A)). But it turns out that this can't happen with elementary row operations, because every such operation is invertible.
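One way to see the invertibility point concretely (with a made-up matrix, for illustration only): an elementary row operation is left-multiplication by an invertible matrix E, so A = E^(-1)·(EA), which means each of A and EA has rows that are combinations of the other's rows.

```python
from sympy import Matrix, eye

# Hypothetical 3x5 matrix (for illustration only).
A = Matrix([
    [1, 2, 0, 1, 3],
    [2, 4, 1, 3, 7],
    [0, 1, 1, 1, 1],
])

# The elementary row operation "R2 -> R2 - 2*R1" as an invertible matrix E.
E = eye(3)
E[1, 0] = -2

B = E * A                     # A after one step of row reduction

# E is invertible, so A = E**-1 * B: rows of A are combinations of rows
# of B and vice versa, hence RS(A) = RS(B). Checking via ranks: stacking
# A on top of B adds no new directions.
print(E.inv() * B == A)
print(A.rank(), B.rank(), A.col_join(B).rank())
```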

Is it because the rows of RRE(A) are linear combinations of the rows of A, so every row of RRE(A) lies in RS(A), and hence RS(RRE(A)) is a subset of RS(A), since any linear combination of the rows of RRE(A) stays inside RS(A)?
If that works for you as an explanation, then yes. (There are a lot of almost identical ways of explaining this, so provided you're confident about one of them, and that you could give more detail if needed, then that's fine).

Lastly, it says by definition Lin(B3) = RS(RRE(A)). Is this because the row space of RRE(A) is the set of linear combinations of all rows of RRE(A), and B3 is the set of rows of RRE(A) which have leading 1s, so any other row in RRE(A) is a linear combination of the rows in B3 (indeed, any row of RRE(A) not in B3 is a zero row), and therefore RS(RRE(A)) = Lin(B3)?
I'm not sure it's possible to answer this without exact definitions of the terms. (I don't think it would be "by definition" by the terms I used when I first did this, but it was over 30 years ago, so I don't recall exactly...)
#8
(Original post by DFranklin)
The solution set of a linear system (of the form Ax = b) is not a vector space unless b = 0, so talking about the basis of this set doesn't really make sense.

On the other hand, if you have two solutions Ax = b and Ay = b, then A(x-y) = 0, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to Av = 0). Which is where you get the connection you're referring to.
Sorry, it's very hard to explain. What I meant is that:
Let's say I have a 3x5 matrix A, i.e. 3 rows and 5 columns, so each column of A belongs to R^3. Now, the column space, row space and null space of A are subspaces of the vector spaces R^3, R^5 and R^5 respectively. So, e.g. N(A), the null space of the matrix A, is the set of solutions to the linear system Ax = 0. Hence it is a flat through the origin, and its solutions can be represented in vector parametric form. Now, if it has free parameters, i.e. A doesn't have full column rank, then are the direction vectors of the system (i.e. the vectors associated with the free parameters in the general vector parametric solution) always the basis vectors of the set N(A), the null space of the matrix A?

So, what I mean is: if the set in question is, e.g., the set of solutions to a linear system, and its general solution can be represented in vector parametric form, are its basis vectors always the vectors associated with the free parameters in that general solution, i.e. the direction vectors of the flat?

This is just in general and not necessarily for any special subspaces like column space of A, row space of A etc.
For example, if we have the linear system Ax = b and it has vector parametric solutions e.g. (x y z) = (1 1 2)T + s*((1 3 1)T) + t*((2 5 7)T) where s, t are real numbers and all of those are column vectors i.e. (1 1 2) transpose.
Then, would the basis vectors for this subspace be the direction vectors i.e. {(1 3 1)T, (2 5 7)T}?
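Using the numbers from this example (treated purely illustratively, since the matrix A behind them isn't given): picking two solutions from the parametric form and subtracting shows the difference is always a combination of the two direction vectors.

```python
from sympy import Matrix

p  = Matrix([1, 1, 2])        # particular solution from the example
d1 = Matrix([1, 3, 1])        # direction vectors from the parametric form
d2 = Matrix([2, 5, 7])

def sol(s, t):
    """General solution (x y z)^T = p + s*d1 + t*d2."""
    return p + s * d1 + t * d2

u, v = sol(1, 2), sol(-3, 5)

# The difference of any two solutions is a combination of d1 and d2
# (here with coefficients 1 - (-3) = 4 and 2 - 5 = -3), i.e. it lies
# in the span of the direction vectors, the null space of the
# underlying coefficient matrix.
diff = u - v
print(diff == 4 * d1 - 3 * d2)   # True
```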
#9
(Original post by Chittesh14)
Sorry, it's very hard to explain. What I meant is that:
Let's say I have a 3x5 matrix A, i.e. 3 rows and 5 columns, so each column of A belongs to R^3. Now, the column space, row space and null space of A are subspaces of the vector spaces R^3, R^5 and R^5 respectively. So, e.g. N(A), the null space of the matrix A, is the set of solutions to the linear system Ax = 0. Hence it is a flat through the origin, and its solutions can be represented in vector parametric form. Now, if it has free parameters, i.e. A doesn't have full column rank, then are the direction vectors of the system (i.e. the vectors associated with the free parameters in the general vector parametric solution) always the basis vectors of the set N(A), the null space of the matrix A?
Yes; the second paragraph in the post of mine you're replying to explains why.

This is just in general and not necessarily for any special subspaces like column space of A, row space of A etc.
For example, if we have the linear system Ax = b and it has vector parametric solutions e.g. (x y z) = (1 1 2)T + s*((1 3 1)T) + t*((2 5 7)T) where s, t are real numbers and all of those are column vectors i.e. (1 1 2) transpose.
Then, would the basis vectors for this subspace be the direction vectors i.e. {(1 3 1)T, (2 5 7)T}?
Again, what subspace are you talking about? (The only logical one would be N(A), but you're not exactly being explicit).
#10
(Original post by DFranklin)
The solution set of a linear system (of the form Ax = b) is not a vector space unless b = 0, so talking about the basis of this set doesn't really make sense.

On the other hand, if you have two solutions Ax = b and Ay = b, then A(x-y) = 0, which means you can get from any solution to any other solution by adding a vector in the null space (i.e. a solution to Av = 0). Which is where you get the connection you're referring to.
Ah thank you, this makes much more sense. So the direction vectors of the system, which appear in the VPE (vector parametric form of the general solution), are of course vectors in the null space of the matrix A. These are the basis vectors, since you can get to any vector on the flat representing the solutions x of Ax = b (when there is more than one solution, i.e. the flat is at least 1-dimensional).

So just to clarify: the flat corresponding to the linear system Ax = b is the graph of all the vectors x for which that linear system is satisfied. The direction vectors of this flat in its vector parametric equation are vectors in the null space of A (i.e. the vectors associated with the free parameters in VPE form), and these allow you to go from one solution to another, e.g. from x to y to z, where Ax = Ay = Az = b.
In a similar way, the "basis vectors" of the set representing this flat, i.e. {x | Ax = b}, are vectors {v1, v2, ...} that are linearly independent and span the flat's directions, so any vector in the set (i.e. any solution of the system Ax = b) can be reached from a particular solution through linear combinations of them. (So they basically correspond to the direction vectors of the system: they take you to any solution x when the solutions are represented on a graph.)

Sorry, I find it hard to write clearly.
Last edited by Chittesh14; 2 years ago
#11
(Original post by DFranklin)
A linear combination of linear combinations is still a linear combination (at the end of the day, if you group everything by the row vectors r_1, ..., r_m, you've always got something of the form c_1 r_1 + c_2 r_2 + ... + c_m r_m, so a linear combination).

The real thing to "worry" about is actually whether RS(RRE(A)) is strictly smaller than RS(A). (For example, we *could* reduce A to the zero-matrix by row operations, if we simply multiplied every row by 0, and then obviously RS(0) is smaller than RS(A)). But it turns out that this can't happen with elementary row operations, because every such operation is invertible.

If that works for you as an explanation, then yes. (There are a lot of almost identical ways of explaining this, so provided you're confident about one of them, and that you could give more detail if needed, then that's fine).

I'm not sure it's possible to answer this without exact definitions of the terms. (I don't think it would be "by definition" by the terms I used when I first did this, but it was over 30 years ago, so I don't recall exactly...)
I'm sorry, I didn't see this before my reply.
Oh right, why do I have to worry about whether RS(RRE(A)) is strictly smaller than RS(A)? Is it because I'm trying to show they're equal? Thank you for the explanation of why it can't be strictly smaller - that was really nice, I didn't think of that.

Thanks, was just making sure I know the reasoning behind the explanation, else I wouldn't be able to prove it if I had to. Thank you for clarifying.

No problem, I'll probably have to ask the lecturer about the notes, as some stuff is hard to understand when it just says "by definition".
#12
(Original post by DFranklin)
Yes; the second paragraph in the post of mine you're replying to explains why.

Again, what subspace are you talking about? (The only logical one would be N(A), but you're not exactly being explicit).

For example, suppose A is the coefficient matrix on the left-hand side of the augmented matrix (A|b) for the linear system Ax = b. So, as we have now confirmed, the solution set of this system Ax = b is a flat in R^5; in fact, it is a 2-dimensional flat in R^5 with direction vectors as shown. Would these be the basis vectors of the set {x | Ax = b}?

Also, I just meant regarding the matrix A: we have subspaces linked to A, i.e. CS(A), RS(A) and N(A), which all go through the origin and stand for the column space, row space and null space of A respectively.

[attach]802740[/attach]

Now I am using a different matrix A. Here, the column space of A is shown: it is the set of vectors a·c1 + b·c2 + ... + e·c5, where c1, ..., c5 are the 1st, ..., 5th columns of A respectively. Here the column space of A is a 2-dimensional subspace of R^3.
So, it is the set of linear combinations of the columns of A.
Now, the vector parametric representation of CS(A) would use direction vectors that are linearly independent, so they would be linearly independent columns of A. Hence, can I say the basis vectors of CS(A) correspond to those direction vectors, i.e. linearly independent columns of the matrix A?
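A sympy sketch of this last point (again with a made-up 3x5 matrix, since the attachment isn't available): the pivot columns found by rref pick out linearly independent columns of A, and those columns form a basis of CS(A).

```python
from sympy import Matrix

# Hypothetical 3x5 matrix whose column space is 2-dimensional in R^3
# (its third row is the sum of the first two, so rank(A) = 2).
A = Matrix([
    [1, 2, 3, 0, 1],
    [0, 1, 1, 1, 2],
    [1, 3, 4, 1, 3],
])

_, pivots = A.rref()                  # pivot column indices
basis = [A.col(j) for j in pivots]    # those columns of A span CS(A)

print(pivots, A.rank())
# sympy's columnspace() returns exactly the pivot columns of A.
print(basis == A.columnspace())
```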
Last edited by Chittesh14; 2 years ago