
What Are Tensors and Why Are They Used in Relativity?


If you try learning general relativity, and sometimes special relativity, on your own, you will undoubtedly run into tensors. This article will outline what tensors are and why they are so important in relativity. To keep things simple and visual, we'll start in two dimensions and, in the end, generalize to more dimensions.

COORDINATE FRAMES

The essence of relativity, and thus its use of tensors, lies in coordinate frames, or frames of reference. Consider two coordinate frames, x and x' (read as x-prime). In two dimensions, these coordinate frames have the ##x_1## and ##x_{1'}## axes as their respective horizontal axes, and the ##x_2## and ##x_{2'}## axes as their respective vertical axes. This is shown in the figure below.

Now, we can describe an axis in one coordinate frame in terms of another coordinate frame. In the figure above, we can define the quantity ##\frac{\partial x_1}{\partial x_{1'}}##, which is the ratio of the amount the ##x_1## axis changes in the x' frame of reference as you change ##x_{1'}## by an infinitesimally small amount to the infinitesimally small change you made to ##x_{1'}## (this isn't a "normal" partial derivative in the sense that the axis you take the partial derivative of is not a function. It's better to just think of it as the rate of change of an axis in one frame of reference with respect to another axis in another frame of reference). This is essentially the rate of change of ##x_1## with respect to ##x_{1'}##. If ##x_1## were at a 45 degree angle to ##x_{1'}##, using the Pythagorean theorem we know that this rate of change is ##\sqrt{2}##. This rate-of-change expression, and its reciprocal, will be useful when changing frames of reference. Note that in many cases it is important to remember that the ##\partial## refers to an infinitesimally small change in something, as axes won't always be simple straight lines with respect to another reference frame.

When working in different coordinate frames, as we do throughout physics, we need to use quantities that can be expressed in any coordinate frame, because any equation using those quantities is the same in any coordinate frame. These quantities happen to be tensors. But that definition of tensors also fits ordinary numbers (scalars) and vectors. In fact, these, too, are tensors. Scalars are called rank-0 tensors, and vectors are called rank-1 tensors. For the most part, when dealing with inertial (non-accelerating) coordinates, vectors and scalars work just fine. However, when you start accelerating and curving spacetime, you need something else to keep track of that extra information. That's where higher-rank tensors come into play, particularly, but not exclusively, rank-2 tensors.

VECTORS

The first step in really understanding the general idea of a tensor is understanding vectors. A vector is defined as something with both a magnitude and a direction, such as velocity or force. Vectors are often represented as a line segment with an arrowhead on one end. As shown on the right, a vector ##V_{\mu}## has components in the ##x_1## and ##x_2## directions, ##V_1## and ##V_2##, respectively (note that these are just numbers, not vectors, because we already know each component's direction). We can express ##V_{\mu}## in terms of its components using row vector notation, where $$V_{\mu}=\begin{pmatrix}V_1 & V_2\end{pmatrix}$$ or using column vector notation, where $$V_{\mu}=\begin{pmatrix}V_1 \\ V_2\end{pmatrix}$$ Note that we're not going to be concerned with the position of a vector; we're just going to focus on its representation as a row or column vector.

 

TENSORS

We can now generalize vectors to tensors. Specifically, we will be using rank-2 tensors (remember, scalars are rank-0 tensors and vectors are rank-1 tensors, but when I say tensor, I'm referring to rank-2 tensors). These tensors are represented by a matrix, which, in 2 dimensions, is a 2×2 matrix. Now, we must use 2 subscripts, the first representing the row and the second representing the column. In 2 dimensions, $$T_{\mu \nu}=\begin{pmatrix}T_{11} & T_{12} \\ T_{21} & T_{22}\end{pmatrix}$$ Note that each component of a vector or tensor is a scalar.
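
To make the index bookkeeping concrete, here is a minimal sketch in Python with NumPy (my own illustration, not part of the original article) that stores a vector and a rank-2 tensor in 2 dimensions as arrays; the component values are made up, and NumPy indices run from 0 rather than 1.

```python
import numpy as np

# A vector (rank-1 tensor) in 2 dimensions: one index, two components.
V = np.array([3.0, 4.0])          # V_1 = 3, V_2 = 4 (illustrative values)

# A rank-2 tensor in 2 dimensions: two indices, a 2x2 matrix of components.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # T_11 = 1, T_12 = 2, T_21 = 3, T_22 = 4

# Each component is a scalar; indices here run 0..1 rather than 1..2.
print(V[0])      # the component V_1
print(T[0, 1])   # the component T_12
```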

You may have noticed that the rank of a tensor is shown by the number of indices on it. A scalar, S, has no indices (as there is only one component and thus no rows or columns) and is thus a rank-0 tensor; a vector, ##V_{\mu}##, has 1 index and is thus a rank-1 tensor; and a rank-2 tensor, ##T_{\mu\nu}##, has 2 indices. You can also see how vector and tensor components are scalars: although they are written with indices, those indices are fixed to specific values rather than being free to take on different values.

THE METRIC TENSOR

There is a special tensor used often in relativity called the metric tensor, represented by ##g_{\mu \nu}##. This rank-2 tensor essentially describes the coordinate system used, and can be used to determine the curvature of space. For Cartesian coordinates in 2 dimensions, or a coordinate frame with straight and perpendicular axes, $$g_{\mu \nu}=\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}$$ which is called the identity matrix. If we switch to a different coordinate system, the metric tensor will be different. For example, for polar coordinates, ##(r,\theta)##, $$g_{\mu \nu}=\begin{pmatrix}1 & 0 \\ 0 & r^2\end{pmatrix}$$
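
As a quick check of what the metric encodes, the sketch below (Python/NumPy, my own illustration rather than part of the original article, with made-up numbers) measures the squared length of the same small displacement once with the Cartesian metric and once with the polar metric; the two answers agree.

```python
import numpy as np

# An illustrative point and small displacement, given in polar coordinates (r, theta).
r, theta = 2.0, 0.5
dr, dtheta = 1e-3, 2e-3

# Metric tensors in 2 dimensions.
g_cart  = np.eye(2)              # Cartesian: the identity matrix
g_polar = np.diag([1.0, r**2])   # polar: diag(1, r^2)

# The same displacement expressed in Cartesian components,
# using x = r cos(theta), y = r sin(theta).
dx = dr*np.cos(theta) - r*np.sin(theta)*dtheta
dy = dr*np.sin(theta) + r*np.cos(theta)*dtheta

ds2_cart  = np.array([dx, dy]) @ g_cart @ np.array([dx, dy])
ds2_polar = np.array([dr, dtheta]) @ g_polar @ np.array([dr, dtheta])

print(ds2_cart, ds2_polar)       # the two squared lengths agree
```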

DOT PRODUCTS AND THE SUMMATION CONVENTION

Take two vectors, ##V^{\mu}## and ##U^{\nu}## (where ##\mu## and ##\nu## are superscripts, not exponents; just like subscripts, these superscripts are indices on the vectors). We can take the dot product of these two vectors by taking the sum of the products of corresponding vector components. That is, in 2 dimensions, $$V^{\mu} \cdot U^{\nu}=V^1U^1+V^2U^2$$ It is clear how this can be generalized to higher dimensions. However, this only works in Cartesian coordinates. To generalize it to any coordinate system, we say that $$V^{\mu} \cdot U^{\nu}=g_{\mu \nu}V^{\mu}U^{\nu}$$ To evaluate this, we use the Einstein summation convention, where you sum over all indices that appear both on the top and on the bottom (in this case both ##\mu## and ##\nu##). In two dimensions that would mean $$V^{\mu} \cdot U^{\nu}=g_{11}V^{1}U^{1}+g_{21}V^{2}U^{1}+g_{22}V^{2}U^{2}+g_{12}V^{1}U^{2}$$ where we have summed up terms with every possible combination of the summed indices (in 2 dimensions). This summation convention essentially cancels out a superscript-subscript pair. You can see that, because we are only dealing with tensor and vector components, which are just scalars, the dot product of two vectors is a scalar. In Cartesian coordinates, because ##g_{12}## and ##g_{21}## are zero and ##g_{11}## and ##g_{22}## are one, we recover our original definition of the dot product in Cartesian coordinates.
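
Here is a minimal sketch of the summation convention in Python/NumPy (my own illustration, not from the original article): np.einsum sums over the repeated indices for us, and the explicit four-term sum gives the same answer. The metric and component values are made up.

```python
import numpy as np

g = np.diag([1.0, 4.0])        # an illustrative metric (diag(1, r^2) with r = 2)
V = np.array([1.0, 2.0])       # illustrative contravariant components V^mu
U = np.array([3.0, 4.0])       # illustrative contravariant components U^nu

# Einstein summation: g_{mu nu} V^mu U^nu, summed over both indices.
dot = np.einsum('mn,m,n->', g, V, U)

# The same thing written out term by term (indices run 0..1 here).
dot_explicit = sum(g[m, n] * V[m] * U[n] for m in range(2) for n in range(2))

print(dot, dot_explicit)       # both give the same scalar
```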

We can also use this convention within a single tensor. For example, there is a tensor used in general relativity called the Riemann tensor, ##R^{\rho}_{\mu \rho \nu}##, a rank-4 tensor (since it has 4 indices), which can be contracted to ##R_{\mu \nu}##, a rank-2 tensor, by summing over the ##\rho##, which shows up as both a superscript and a subscript. That is, in 2 dimensions, $$R^{\rho}_{\mu \rho \nu}\equiv R^1_{\mu 1 \nu}+R^2_{\mu 2 \nu}=R_{\mu \nu}$$
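
The same bookkeeping handles this contraction. The sketch below (Python/NumPy, my own illustration using random made-up components rather than an actual Riemann tensor) contracts the first and third indices of a rank-4 array to get a rank-2 array.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(2, 2, 2, 2))     # made-up rank-4 components, index order (rho, mu, sigma, nu)

# Contract the first and third indices: R_{mu nu} = R^rho_{mu rho nu}.
R2 = np.einsum('rmrn->mn', R)

# The same contraction written as an explicit sum over rho (indices run 0..1 here).
R2_explicit = R[0, :, 0, :] + R[1, :, 1, :]

print(np.allclose(R2, R2_explicit))   # True
```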

VECTOR AND TENSOR TRANSFORMATIONS

Now that you know what a tensor is, the big question is: why are they used in relativity? It is because, by definition, tensors can be expressed in any frame of reference. Because of this, any equation written purely in terms of tensors holds up in any frame of reference. Relativity is all about physics in different frames of reference, and so it is essential to use tensors, which can be expressed in any frame of reference. But how do we transform a vector from one frame of reference to another? We won't focus on transforming scalars between frames of reference. Instead, we'll focus on transforming vectors and rank-2 tensors. Along the way, we will also discover a difference between covariant (indices as subscripts) and contravariant (indices as superscripts) tensors.

First, we can transform the vector ##V_{\mu}## into the vector ##V_{\mu'}##. Note that these are the same vector; one is just expressed in the "unprimed" coordinate system and one in the "primed" coordinate system. If a dimension in the primed frame of reference changes with respect to a dimension in the unprimed frame of reference, the vector components in the two frames of reference for those dimensions should be related. In other words, if the ##x_{1'}## dimension can be described in the x frame of reference by the ##x_1## and ##x_2## dimensions, then ##V_{1'}## (the vector component along the ##x_{1'}## axis) should be described by the relationship between the ##x_{1'}## dimension and the ##x_1## and ##x_2## dimensions, as well as by the vector components along those dimensions (##V_1## and ##V_2##). So, our transformation equations must build each component of ##V_{\mu'}## by expressing it in terms of all dimensions and vector components in the unprimed frame of reference (as any dimension in the primed frame of reference can have a relationship with all of the unprimed dimensions, which we must account for). Thus, we sum over all unprimed indices, so for each component of ##V_{\mu'}## we have a term for each vector component in the unprimed frame of reference and its axis's relationship to the ##x_{\mu'}## axis. The covariant vector transformation is thus $$V_{\mu'}=\frac{\partial x_{\mu}}{\partial x_{\mu'}}V_{\mu}$$ and summing over the unprimed indices in 2 dimensions, we get $$V_{\mu'}=\frac{\partial x_{\mu}}{\partial x_{\mu'}}V_{\mu}=\frac{\partial x_{1}}{\partial x_{\mu'}}V_{1}+\frac{\partial x_{2}}{\partial x_{\mu'}}V_{2}$$ To change from ##V^{\mu}## to ##V^{\mu'}## we use a similar equation, but with the reciprocal of the derivative term used in the covariant transformation. That is, $$V^{\mu'}=\frac{\partial x_{\mu'}}{\partial x_{\mu}}V^{\mu}$$ where the unprimed indices are summed over as before. This is the contravariant vector transformation.
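
As a concrete illustration of the contravariant rule, the sketch below in Python/NumPy (my own example rather than part of the article) transforms a vector's components from Cartesian to polar coordinates using the matrix of derivatives ##\partial x_{\mu'}/\partial x_{\mu}##; the point and the component values are made up.

```python
import numpy as np

# An illustrative point, in Cartesian and in polar coordinates.
x, y = 1.0, 1.0
r, theta = np.hypot(x, y), np.arctan2(y, x)

# Derivatives of the primed (polar) coordinates with respect to the
# unprimed (Cartesian) ones: rows are (r, theta), columns are (x, y).
J = np.array([[ x/r,      y/r],       # dr/dx,     dr/dy
              [-y/r**2,   x/r**2]])   # dtheta/dx, dtheta/dy

V = np.array([2.0, 3.0])              # contravariant components V^mu in Cartesian coordinates

# Contravariant transformation: V^{mu'} = (partial x_{mu'} / partial x_mu) V^mu.
V_polar = np.einsum('pm,m->p', J, V)  # same as J @ V

print(V_polar)                        # the same vector, in polar components
```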

To transform rank-2 tensors, we do something similar, but we must adapt to having 2 indices. For a covariant tensor, to change from ##T_{\mu\nu}## to ##T_{\mu'\nu'}##, $$T_{\mu'\nu'}=\frac{\partial x_{\mu}}{\partial x_{\mu'}}\frac{\partial x_{\nu}}{\partial x_{\nu'}}T_{\mu\nu}$$ and summing over the unprimed indices in two dimensions, we get $$T_{\mu'\nu'}=\frac{\partial x_{\mu}}{\partial x_{\mu'}}\frac{\partial x_{\nu}}{\partial x_{\nu'}}T_{\mu\nu}=\frac{\partial x_{1}}{\partial x_{\mu'}}\frac{\partial x_{1}}{\partial x_{\nu'}}T_{11}+\frac{\partial x_{1}}{\partial x_{\mu'}}\frac{\partial x_{2}}{\partial x_{\nu'}}T_{12}+\frac{\partial x_{2}}{\partial x_{\mu'}}\frac{\partial x_{1}}{\partial x_{\nu'}}T_{21}+\frac{\partial x_{2}}{\partial x_{\mu'}}\frac{\partial x_{2}}{\partial x_{\nu'}}T_{22}$$ For contravariant tensors, we once again use the reciprocal of the derivative term. To change from ##T^{\mu\nu}## to ##T^{\mu'\nu'}##, we use $$T^{\mu'\nu'}=\frac{\partial x_{\mu'}}{\partial x_{\mu}}\frac{\partial x_{\nu'}}{\partial x_{\nu}}T^{\mu\nu}$$ where the indices are summed over the same way as before.
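
One nice sanity check (again a sketch of my own in Python/NumPy, not from the article) is to apply the covariant rank-2 rule to the Cartesian metric: using the derivatives of the Cartesian coordinates with respect to the polar ones, the transformation reproduces the polar metric diag(1, r²) from the metric tensor section above.

```python
import numpy as np

r, theta = 2.0, 0.7                      # an illustrative point

# Derivatives of the unprimed (Cartesian) coordinates with respect to the
# primed (polar) ones: A[mu, mu'] = partial x_mu / partial x_{mu'}.
A = np.array([[np.cos(theta), -r*np.sin(theta)],   # dx/dr, dx/dtheta
              [np.sin(theta),  r*np.cos(theta)]])  # dy/dr, dy/dtheta

g_cart = np.eye(2)                       # the Cartesian metric

# Covariant rank-2 transformation:
# g_{mu' nu'} = (partial x_mu / partial x_{mu'}) (partial x_nu / partial x_{nu'}) g_{mu nu}
g_polar = np.einsum('mp,nq,mn->pq', A, A, g_cart)

print(g_polar)                           # approximately diag(1, r**2)
```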

Although the mechanics of those transformations are pretty sophisticated, the essential half is that we are able to have transformation equations, and thus tensors are essential to physics the place we have to work in numerous coordinate frames.

GENERALIZING

We have so far been doing everything in 2 dimensions, but, as you know, the world doesn't work in two dimensions. It is fairly easy, though, to carry what we know in two dimensions over to more dimensions. First of all, vectors are not restricted to 2 components; in 3 dimensions they have 3 components, in 4 dimensions they have 4 components, and so on. Similarly, rank-2 tensors are not always 2×2 matrices. In n dimensions, a rank-2 tensor is an n×n matrix. In each of these cases, the number of dimensions is essentially the number of different values the indices on vectors and tensors can take on (i.e., in 2 dimensions the indices can be 1 or 2, in 3 dimensions the indices can be 1, 2, or 3, and so on). Note that in 4 dimensions the indices often take on the values 0, 1, 2, or 3, but the space is still 4-dimensional because there are 4 possible values the indices can take on, even though one of those values is 0. It is purely a matter of convention.

We must also generalize the summation convention to more dimensions. When summing over an index, you just need to remember to sum over all possible values of that index. For example, if summing over both ##\mu## and ##\nu## in 3 dimensions, you must include terms for ##\mu=1## and ##\nu=1##; ##\mu=1## and ##\nu=2##; ##\mu=1## and ##\nu=3##; ##\mu=2## and ##\nu=1##; ##\mu=2## and ##\nu=2##; ##\mu=2## and ##\nu=3##; ##\mu=3## and ##\nu=1##; ##\mu=3## and ##\nu=2##; and ##\mu=3## and ##\nu=3##. Or, in the example with the Riemann tensor, in r dimensions the equation would be $$R^{\rho}_{\mu \rho \nu}\equiv R^1_{\mu 1 \nu}+R^2_{\mu 2 \nu}+R^3_{\mu 3 \nu}+\ldots+R^r_{\mu r \nu}=R_{\mu \nu}$$
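
To see how this scales, here is a short sketch (Python/NumPy, my own illustration with made-up components) that performs the double sum over ##\mu## and ##\nu## in 3 dimensions, once with explicit loops over every combination of index values and once with the summation convention via np.einsum.

```python
import numpy as np

n = 3                                  # number of dimensions
rng = np.random.default_rng(1)
g = rng.normal(size=(n, n))            # made-up metric components
V = rng.normal(size=n)                 # made-up vector components
U = rng.normal(size=n)

# Summing over both mu and nu means one term per combination of index values.
total = 0.0
for mu in range(n):
    for nu in range(n):
        total += g[mu, nu] * V[mu] * U[nu]

# The summation convention does the same bookkeeping in one line.
print(np.isclose(total, np.einsum('mn,m,n->', g, V, U)))   # True
```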

Finally, we can also generalize the transformation equations, not only to more dimensions but to different ranks of tensors. When transforming a tensor, you just need a derivative term for each index. You can also transform tensors with both covariant and contravariant indices. For example, for a tensor with one contravariant and two covariant indices, the transformation equation is $$T_{\mu'\nu'}^{\xi'}=\frac{\partial x_{\mu}}{\partial x_{\mu'}}\frac{\partial x_{\nu}}{\partial x_{\nu'}}\frac{\partial x_{\xi'}}{\partial x_{\xi}}T_{\mu\nu}^{\xi}$$ where, as you can see, the contravariant transformation derivative (with the primed coordinate on top) is used for ##\xi## and the covariant transformation derivative (with the unprimed coordinate on top) is used for ##\mu## and ##\nu##. The unprimed indices are summed over as usual.
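
Finally, here is a sketch of that mixed transformation in Python/NumPy (my own illustration; the derivative matrices and tensor components are made up): the covariant indices pick up one derivative matrix, and the contravariant index picks up its inverse.

```python
import numpy as np

n = 2
rng = np.random.default_rng(2)

# Made-up derivative matrices for a 2-dimensional change of coordinates.
A = rng.normal(size=(n, n))      # A[mu, mu'] = partial x_mu   / partial x_{mu'}
B = np.linalg.inv(A)             # B[mu', mu] = partial x_{mu'} / partial x_mu

T = rng.normal(size=(n, n, n))   # made-up components T^xi_{mu nu}, index order (xi, mu, nu)

# T^{xi'}_{mu' nu'} = (dx_mu/dx_{mu'}) (dx_nu/dx_{nu'}) (dx_{xi'}/dx_xi) T^xi_{mu nu}
T_primed = np.einsum('mp,nq,sx,xmn->spq', A, A, B, T)

print(T_primed.shape)            # (2, 2, 2): one primed component per combination of index values
```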
