Proving the Rule for Linear Transformations and Understanding \(T^T\) in Vector Spaces

In linear algebra, a linear transformation \(T\) between two vector spaces \(V\) and \(W\) can be represented by a matrix once bases are chosen. Understanding the relationship between \(T\) and its transpose \(T^T\) is crucial for advanced applications. This article walks through the steps to prove the rule for \(T^T\) in the context of a vector space and its dual.

Introduction to Linear Transformations and Dual Bases

A linear transformation \(T: V \rightarrow W\) is a function that preserves the operations of vector addition and scalar multiplication. In other words, for any vectors \(\alpha, \beta \in V\) and any scalars \(a, b\), the following property holds:
\[ T(a\alpha + b\beta) = aT(\alpha) + bT(\beta). \]
Once we fix a basis \(B\) for the domain \(V\) and a basis \(C\) for the codomain \(W\), the transformation \(T\) can be represented by a matrix \([T]_{B,C}\): its \(j\)-th column lists the coordinates, with respect to \(C\), of the image of the \(j\)-th basis vector of \(B\). In particular, we write \([\mathrm{id}]_{B,C}\) for the map that sends each basis vector of \(B\) to the corresponding basis vector of \(C\); its matrix with respect to \(B\) and \(C\) is the identity matrix.
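
To make these definitions concrete, here is a minimal NumPy sketch; the map (given by the matrix \(A\)) and the bases \(B\) and \(C\) are illustrative choices, not taken from the text above. It checks the linearity property numerically and builds the matrix \([T]_{B,C}\) column by column by expressing each \(T(b_j)\) in the basis \(C\).

```python
import numpy as np

# Illustrative linear map T: R^3 -> R^2, acting on standard coordinates
# through the matrix A (an arbitrary example, not from the article).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def T(v):
    return A @ v

# Check linearity: T(a*alpha + b*beta) == a*T(alpha) + b*T(beta).
rng = np.random.default_rng(0)
alpha, beta = rng.normal(size=3), rng.normal(size=3)
a, b = 2.0, -0.5
assert np.allclose(T(a * alpha + b * beta), a * T(alpha) + b * T(beta))

# Bases written as matrices whose columns are the basis vectors.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # columns are b_1, b_2, b_3 (basis of R^3)
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are c_1, c_2 (basis of R^2)

# [T]_{B,C}: column j holds the coordinates of T(b_j) in the basis C.
T_BC = np.linalg.solve(C, A @ B)
print(T_BC)
```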

The Meaning of \(T^T\)

The adjoint (or transpose) of a linear transformation \(T\), written \(T^T\), is defined in terms of the duality between a vector space \(V\) and its dual space \(V^*\). The dual space \(V^*\) consists of all linear functionals on \(V\) (linear maps from \(V\) to the field of scalars, typically \(\mathbb{R}\) or \(\mathbb{C}\)).

Given \(T: V \rightarrow W\), the transpose \(T^T: W^* \rightarrow V^*\) is defined so that for every functional \(\beta \in W^*\),
\[ T^T(\beta) = \beta \circ T, \]
that is, \(\big(T^T(\beta)\big)(v) = \beta(T(v))\) for all \(v \in V\).

**Note:** The notation \(\beta \circ T\) means that \(T\) is applied first and \(\beta\) is applied to the result: for any \(v \in V\), \((\beta \circ T)(v) = \beta(T(v))\). Since \(\beta \circ T\) is a linear map from \(V\) to the scalars, it is itself an element of \(V^*\), which is why \(T^T\) maps \(W^*\) into \(V^*\).
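
As a quick sanity check on this definition, the following sketch represents a functional \(\beta\) by a row vector and verifies that \(T^T(\beta) = \beta \circ T\) corresponds, in coordinates, to multiplying that row vector by the matrix of \(T\). The specific matrix and row vector are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative setup: T: R^3 -> R^2 acts on standard coordinates through A,
# and a functional beta in (R^2)^* is represented by a row vector, so that
# beta(w) = beta_row @ w.  Both A and beta_row are arbitrary examples.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
beta_row = np.array([4.0, -1.0])

def T(v):
    return A @ v

def beta(w):
    return beta_row @ w

# T^T(beta) is the functional beta ∘ T on R^3: apply T first, then beta.
def T_transpose_beta(v):
    return beta(T(v))

# In coordinates, (beta ∘ T)(v) = (beta_row @ A) @ v, so the row vector
# representing T^T(beta) is beta_row @ A; equivalently, T^T acts on the
# column of coefficients of beta by the transposed matrix A.T.
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(T_transpose_beta(v), (beta_row @ A) @ v)
```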

Proving \(T^T = [\mathrm{id}]_{C^*, B^*}\)

To prove the statement \(T^T = [\mathrm{id}]_{C^*, B^*}\), we need to understand the properties of the dual bases and how they interact with \(T\).

Let \(B = \{ b_1, b_2, \ldots, b_n \}\) be a basis for \(V\) and \(C = \{ c_1, c_2, \ldots, c_n \}\) be a basis for \(W\). The dual basis \(B^* = \{ b_1^*, \ldots, b_n^* \}\) for \(V^*\) is defined by \(b_i^*(b_j) = \delta_{ij}\), where \(\delta_{ij}\) is the Kronecker delta (it equals 1 if \(i = j\) and 0 otherwise). Similarly, the dual basis \(C^* = \{ c_1^*, \ldots, c_n^* \}\) for \(W^*\) is defined by \(c_i^*(c_j) = \delta_{ij}\).

Now take \(T = [\mathrm{id}]_{B,C}\), the map that carries the basis \(B\) to the basis \(C\), so that \(T(b_j) = c_j\) for every \(j\). Apply \(T^T\) to a dual basis vector \(c_i^*\) and evaluate the result on a basis vector \(b_j\). By the definition of the transpose,
\[ \big(T^T(c_i^*)\big)(b_j) = c_i^*(T(b_j)) = c_i^*(c_j) = \delta_{ij}. \]
Thus \(T^T(c_i^*)\) takes the same values as \(b_i^*\) on every basis vector of \(V\), and since a linear functional is determined by its values on a basis, we conclude
\[ T^T(c_i^*) = b_i^*. \]
This shows that \(T^T\) maps each dual basis vector \(c_i^*\) to the corresponding dual basis vector \(b_i^*\). In other words, \(T^T\) is the map \([\mathrm{id}]_{C^*, B^*}\) that carries the dual basis \(C^*\) to the dual basis \(B^*\), and its matrix with respect to these dual bases is the identity matrix.
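
A small numerical check of this conclusion, under the same hypothesis \(T(b_j) = c_j\); the bases below are arbitrary illustrative choices. The dual basis functionals appear as the rows of the inverse basis matrices, and composing each \(c_i^*\) with \(T\) recovers \(b_i^*\).

```python
import numpy as np

# B and C hold the basis vectors b_j and c_j as columns (arbitrary invertible
# examples).  The dual basis functionals b_i^* and c_i^* are the rows of
# B^{-1} and C^{-1}, since b_i^*(v) = (B^{-1} v)_i picks out the i-th
# coordinate of v with respect to the basis B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])
B_dual = np.linalg.inv(B)   # rows are b_1^*, b_2^*
C_dual = np.linalg.inv(C)   # rows are c_1^*, c_2^*

# The map T with T(b_j) = c_j acts on standard coordinates as A = C B^{-1}.
A = C @ B_dual

# T^T(c_i^*) = c_i^* ∘ T corresponds to the row vector row_i(C^{-1}) @ A.
# The claim T^T(c_i^*) = b_i^* says these rows equal the rows of B^{-1}.
assert np.allclose(C_dual @ A, B_dual)
print("T^T maps each c_i^* to the corresponding b_i^*")
```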

Conclusion

In summary, the adjoint (transpose) \(T^T\) of a linear transformation \(T\) can be understood as a map between the dual spaces of the vector spaces involved, and it carries dual bases to dual bases. Specifically, for the map \([\mathrm{id}]_{B,C}\) that carries the basis \(B\) to the basis \(C\), the statement \(T^T = [\mathrm{id}]_{C^*, B^*}\) is a concise and powerful summary that simplifies many advanced topics in linear algebra and functional analysis.

Keywords

- linear transformation
- vector spaces
- dual basis