Tensors (cont.) - Part 1

PROFESSOR: All right, an announcement that I must make, after developing all these friendships and working together, we will, next Tuesday, once again, enter a brief adversarial relationship. And we will have quiz number two.

What we will cover on that quiz will be about 50%, perhaps a little bit more, on symmetry. And I'll remind you in just a minute what we've covered since the last quiz. And the second half or 40% of the quiz will be on the formalism of tensors and tensor manipulation. And obviously, you've had a chance only to do a very few problems. So these will be pretty basic questions and nothing tricky or very difficult.

So let me remind you very, very quickly just by topics what we've covered. We began the period following the first quiz discussing how we can take the plane groups, two-dimensional symmetries that are translationally periodic, and defining a third translation to stack up the plane groups to give us a three-dimensional periodic arrangement of symmetry elements.

In doing so, we found that we had to pick that third translation with care, because the symmetry elements that were in the plane group were the things that made the lattices of the plane groups special, made them square or hexagonal or rectangular; that was true because there was symmetry there.

Once again, symmetry and the specialness of a lattice are two inseparable parts of the story. But if we did this, the symmetry elements had to coincide. Otherwise, we would reproduce new symmetry elements in the base of the cell and wreck everything.

So when we did that, there were a very limited number of ways of doing it. In the course of doing that, we fell headlong over screw axes in the same way that we fell over glide planes as soon as we put mirror lines in a two-dimensional lattice. The interesting thing was that that exercise of stacking the two-dimensional plane groups gave us almost all of the lattice types that exist in three dimensions.

And the reason for that is we showed very directly that inversion requires nothing of the lattice. So therefore, any of the point groups that were used in deriving the plane groups, when augmented by inversion to give a three-dimensional point group, are going to require nothing more than the lattices that we derived by stacking the plane groups.

And that took care of just about everything except for the cubic symmetries. And we disposed of those in very short order. So that led us to the 14 space lattices, the so-called Bravais lattices. And then we proceeded to work with those lattices and put in the point groups that they could accommodate.

Having discovered glide planes, we then, if we had done the process systematically, would have replaced mirror planes by glide planes. And having discovered screw axes, we replaced pure rotations by screw rotations. And then, doing something we did but didn't have to do very often, mercifully, for the plane groups, we said one could also interweave symmetry elements. And that gave a few additional space groups.

We did not derive very many of them. What is important at this stage is simply to realize that their properties and applications are very similar to what we discovered for the two-dimensional plane groups. The notation is a little bit different.

Because there can be symmetry elements parallel to the plane of the paper so that in depicting the arrangement of symmetry elements you need a little [? chevron-type ?] device to indicate whether you've got a mirror plane or a glide plane in the base of the cell. And similarly with two-fold axes, and that's about the only thing that would be in the plane of the paper if the principal symmetry is normal to the paper, we need a device for indicating the elevation and whether the axis is a two-fold rotation axis or a screw.

So there's a geometrical language for describing the arrangement of the space groups. We had a few problems in determining space group symbols and interpreting what these mean. But again, I don't think I would be sadistic enough to ask you to derive a complicated three-dimensional space group. I don't know. Maybe over the weekend, if I get to feeling mean, I might ask you something in those directions. But it will not be anything terribly formidable.

Then finally, we looked at the way in which space groups can be used to describe three-dimensional periodic arrays of atoms. And that again was strictly analogous to what we did in two dimensions.

The description of the three-dimensional arrangement of atoms, particularly if it is too complicated to be readily appreciated in a projection, would have to be given in terms of the space group, the type of position that is occupied, either a special position or the general position, and then, depending on how many degrees of freedom are available in that position, the coordinates of the atom, or just the x-coordinate, or the x- and y-coordinates.

And then you run to the international tables. You don't have to work this out yourself. And you find the list of coordinates that are related by symmetry for that particular special position and then plug in x and y to get all the coordinates of the atoms in the unit cell.

To go to the practical aspects of this, usually only simple structures can be projected with any clarity that lets them be interpreted. And even with a computer these days one can show an orthographic projection of the structure and then in real time manipulate it so you can see this beautiful construction with colored shiny spheres in it rotating around.

But if it's a complicated structure, even that doesn't do you very much good. It works for NaCl and zinc sulfide. But you know what those look like. And so you don't really need this very attractive, real-time interaction.

What I find works well is to take the structure and slice it apart in layers like a layer cake, look at a limited range of one translation, generally one of the shorter translations, and examine the coordination of atoms within that layer. Slicing the structure apart and looking at it in several layers that are stacked on top of one another, the way I work between the ears, usually lets me gain an appreciation of what's going on in that structure.

Something different might work for you. Matter of fact, you might be a gifted person who can see the things spinning around on a computer screen and understand exactly what it is. But everybody's mind works in a slightly different way.

OK, that is roughly what we'll be covering on symmetry. Then we got into the connection between symmetry and physical properties. And we defined what we meant by a tensor. A tensor of the second rank is something that relates two vectors.

We got into a notation involving the convention that any repeated subscript, and there are subscripts all over the place in tensor properties, is automatically implied to be summed over from 1 to 3. We then derived the laws for transformation of a tensor.

Namely that, if you specify the change of coordinate system by a direction cosine scheme cij, where cij is the cosine of the angle between xi prime and xj, then, since the rows and columns of this array of direction cosines constitute unit vectors, there had to be relations between them. Namely, the sum of the squares of the terms in any row or column had to be unity. And the sum of the products of corresponding terms in any two rows or any two columns had to add to 0. Because these relations represented dot products of unit vectors in a Cartesian coordinate system.

And then using this array of direction cosines, we found that the way in which a second ranked tensor transformed was that each tensor element was given by a linear combination of all of the elements of the original tensor. And out in front of each term was a product of two direction cosines, the subscripts of which were determined by the indices of that element.
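As an illustrative aside that is not part of the lecture, the transformation law and the summation convention are easy to check numerically. The sketch below, in Python with NumPy, uses a made-up tensor and a simple rotation about x3 as the direction cosine scheme; the einsum call is just the repeated-subscript summation written out.

```python
import numpy as np

# A hypothetical second-rank tensor (say, a conductivity) in the old axes.
sigma = np.array([[3.0, 0.5, 0.0],
                  [0.5, 2.0, 0.2],
                  [0.0, 0.2, 1.0]])

# Direction-cosine scheme c[i, j] = cos(angle between x_i' and x_j):
# here, a rotation of 30 degrees about x3.
theta = np.radians(30.0)
c = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

# Rows (and columns) are unit vectors, so c c^T is the identity.
assert np.allclose(c @ c.T, np.eye(3))

# Transformation law for a second-rank tensor:
# sigma'_ij = c_ik c_jl sigma_kl  (repeated subscripts summed from 1 to 3).
sigma_prime = np.einsum('ik,jl,kl->ij', c, c, sigma)

# A vector (first-rank tensor) takes one direction cosine per term: E'_i = c_ij E_j.
E = np.array([1.0, 0.0, 0.0])
E_prime = np.einsum('ij,j->i', c, E)
```

The same pattern extends to higher ranks: a third-rank tensor would simply carry three factors of c in the sum, and a fourth-rank tensor four.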

A vector could be regarded as a first ranked tensor. And everybody understands the law for transformation of a vector. Namely that each component of the vector is a linear combination of every component of the original vector with one direction cosine out in front.

A tensor of second rank has a product of two direction cosines, summed over all the elements of the original tensor. Third-ranked tensors, which, believe it or not, we'll get into after the quiz in considerable detail, and even, shudder, fourth-ranked tensors are going to involve summations that involve scads of terms and products of three and four direction cosines respectively.

So we won't get very far into tensors. We will perhaps have a couple of questions on the nature of anisotropy that's implied by these relations. And I think, if you looked at the problem sets to this point, that's as far as we'll go on the quiz. And we will not throw something at you that you've not seen before. So it'll be merciful.

We'll pull out all the stops in the last quiz. Because then you'll be leaving. And you can hate me. But I never have to deal with you again. So that will not make any difference.

All right, I have handed back all of the problem sets except two, which are partly done. I never again will ask anybody to draw out patterns of rotoinversion and rotoreflection axes. Because they go on and on and on.

And I am so tired of staring at little circles and trying to tell which ones are right and which ones are wrong. But that's something else we have in common. You probably hated looking at them when you were drawing them out for yourself.

So let me tell you what's going to go on for the next several days. Tomorrow is a holiday, not for instructors who are making up a quiz, however. So I will be in my office from early morning on for the rest of the day.

So if you want to come by to ask questions, what I will be doing primarily is finishing up grading the problem sets. And if you want to pick up your problem sets and ask a question about those, I am there at your disposal hopefully with no ringing phone calls and no committee meetings and nothing of that sort.

If I have finished the problem sets and you haven't gotten yours yet, what I will do is tape an envelope to my door and have them tucked in there. So you can come by any time over the weekend and pick them up if you want to get hold of them and look them over at your leisure.

I have to tell you, looking ahead in the schedule, I, unfortunately, bad timing, I have to be away on Monday. So if you have a last minute question or you need to be calmed down from a last minute panic, you can catch me Tuesday morning. I'll be in all of the earlier part of the day.

But if you do have questions, please come in on Friday. Because I'll have lots of time to spend with you. So your problem sets, anything turned in to this point and turned in today, should be on my door, if you haven't called for them, no later than the beginning of the week.

OK, any questions? Any puzzles that you want to go over before we launch into new material?

OK, what we have done with second ranked tensor properties is to derive a whole set of symmetry constraints on something aij that represents a physical property of a crystal. And one of the things that we've seen, and let's look at conductivity once more since that's something we're all familiar with and can readily appreciate, is that the components of the current density vector depend on all of the components of the applied electric field, voltage per unit length.

And inasmuch as, for anything other than a crystal of cubic symmetry, at least some of the numbers in this array are different, that gives two surprising results. One is that the direction of J relative to the applied electric field will change as you apply the field in different directions in the crystal.

And moreover, the magnitude of the current is going to change as you change the orientation of the field. That's true of every crystal in a crystal system other than cubic.

So the thing that I would like to address today is the interesting matter of what the nature of this anisotropy is, given that a consequence of a second ranked tensor relation is that, if we apply an electric field, the current doesn't go in the same direction as the applied field. It runs off in some other direction, as non-intuitive as that might seem.

First of all, there are two directions. If we talk about a property varying with direction, which direction are we talking about, the direction of the applied field, the direction of the resulting current flux, or something halfway in between, split the difference? Well, we can answer that question by doing a couple of thought experiments.

Suppose we had decided that we want to measure the electrical conductivity along the 1, 0, 0 direction of an orthorhombic crystal. What would we do? Well, we get a hold of that crystal. And we would cut surfaces on it such that this was the direction of 1, 0, 0, if that's what we wanted to do. We would glom on electrodes and put a voltage across this crystal, this chunk of crystal that we had cut out.

So when we say we're going to measure the property along 1, 0, 0 or 1, 1, 1, what we really mean is that we're going to cut the crystal normal to that direction and then apply some sort of electrodes or attach some sort of probe. So when we talk about the direction of the property, the direction of the property will be the direction of the applied vector.

And we had a very fancy name for the applied vector. We said we could think of that as a generalized force. And what happens is a generalized displacement.

Notice that, when we apply the field, there are all sorts of components to J. And so the direction of the property, although we would get a number out of this experiment, the property really is a tensor. And there are nine different quantities involved in it.

OK, so the direction of the property is going to be the direction in which we apply the generalized force of the stimulus. When we do that, however, that is going to unequivocally, at least in this experiment, place the electric field in a direction along the crystallographic orientation of interest. Because the electric field in a pair of electrodes like this extends normal to those electrodes.

The current flow, the charge per unit area per unit time, goes off in some direction like this. Well, conductivity relates current flow to applied electric field. So would we say that the conductivity, when we apply the field in that direction, is simply J/E, the magnitude of J over the magnitude of E, since these are both vectors, question mark?

The answer to that, if we do this in terms of a thought experiment, is no. That is not what we're measuring. What we can imagine is that between our electrodes there is some window of unit area.

And what we're going to measure as the current flow is the amount of charge per unit area per unit time that passes between the electrodes, namely the charge per unit area per unit time that gets through this window and reaches the other electrode. That's what we're going to measure.

And what we're going to want to call the conductivity is the ratio of this part of the current flux, let me call that J parallel, over the magnitude of E. And that's what we'll mean by the conductivity in the direction in which we've applied the field.

The actual flow of current goes off in some other direction. But what we measure, at least in this experiment, is the component of that charge flux that goes through a window of unit area that's parallel to the electrodes.

And we don't really care if the current is going off in a different direction. We don't even care if it jumps around in majestic loop-de-loops. What we're going to measure is the amount of that charge motion that moves normal to our window and gets through one unit area from one electrode to another.

So the conductivity in the direction of E, then, is going to be measured by the parallel part of the resulting generalized displacement divided by the magnitude of the applied generalized force.
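As an illustrative aside not from the lecture, here is a minimal numerical sketch, in Python with NumPy and with made-up numbers, of the thought experiment just described: the field is applied along x1, the current runs off in another direction, but what the electrodes see is only the component of J through the window normal to the field.

```python
import numpy as np

# Hypothetical (non-cubic) conductivity tensor and an applied field along x1.
sigma = np.array([[3.0, 0.5, 0.0],
                  [0.5, 2.0, 0.2],
                  [0.0, 0.2, 1.0]])
E = np.array([10.0, 0.0, 0.0])      # applied field (the generalized force)

J = sigma @ E                       # resulting current density, J_i = sigma_ij E_j
E_hat = E / np.linalg.norm(E)       # unit vector along the field

# J is generally not parallel to E ...
print(J)                            # -> [30.  5.  0.], tilted away from x1

# ... but what is measured is only the part of J through the unit-area window
# normal to E, so the measured conductivity is J_parallel / |E|.
J_parallel = J @ E_hat
sigma_measured = J_parallel / np.linalg.norm(E)
print(sigma_measured)               # -> 3.0, i.e. sigma_11 for a field along x1
```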

If you're not convinced and completely overwhelmed by that, let me give you another experiment. Another tensor relation, another tensor property, is the diffusivity of a material. If you put a material in a concentration gradient dc/dx, you produce a flux of matter. And the proportionality constant is the diffusion coefficient. And this has units of length squared per unit time if you put in the units of the concentration gradient and the flux.

Well, dc/dx is a gradient. That's really a vector. So a proper tensor relation, instead of a scalar relation like this, would involve the change of concentration with the j-th coordinate. And out in front would be a diffusion coefficient Dij. And this tensor relation would give us the components of the mass flux J1, J2, J3.

Actually, thermodynamically we shouldn't be talking about a concentration gradient. We should be talking about a gradient in free energy. The concentration works well enough. And the concentration is what one would measure.

So what might a typical experiment be? If you wanted to know what the diffusion was in the 1, 1, 1 direction of some crystal, you would, again, cut a plate that is normal to the direction of interest. And then your boundary conditions can be different. But what you would typically do for a simple experiment is to put some sort of solute on the surface of the crystal.

And what you would see after heating the sample up for a period of time is that the material would have migrated into the sample. And if you plotted concentration as a function of distance along the normal to the plate that you've cut, this would do something like that.

What can you say about this process after you've done the experiment? All that you can say is that some front of concentration has advanced parallel to the surface. And you have no idea what the individual atoms have done. They could be doing loop-de-loops or little hip hops. And all that you measure is the rate at which material advances in the direction of the concentration gradient.

So again, I think I've convinced you, hopefully, that what you're measuring is the flux of matter per unit area per unit time that is in the direction of the applied concentration gradient. And you do not measure, and in fact, in this instance, I don't think I could come up with even a very clever experiment that would let you measure in a single experiment, the magnitude and direction of the net flux.

All right, so if we're measuring a property then like conductivity, the value of the conductivity in the direction of the applied field is going to be the part of the resulting current density that is parallel to the field divided by the magnitude of the field. We all agree on that. Let's have a show of hands. Fine. I saw just a few tired hands saying, OK, I believe, I believe, let's get on with it.

All right, let me now put this, using our tensor relationship, into a nice analytic form that we can not only do something with but can gain some insight from. The first component of the current density vector is given by the conductivity tensor element sigma 1, 1 times the component of the electric field E1, plus sigma 1, 2 times the component of field E2, plus sigma 1, 3 times E3.

Or in general, I don't have to write out every equation. We can say that the i-th component of J, our old friend the second ranked tensor property, again, is given by sigma ij times each component of applied field.

If we specify the direction relative to the same coordinate system in which the electric field is being applied, we can specify that direction by a set of three direction cosines, l1, l2, and l3. So we can write the three components of the electric field, E sub i, as the magnitude of E times the direction cosines l1, l2, and l3. So here we now have the components of the electric field.

If these are the components of the electric field, we can say that J sub i is going to be equal to sigma ij times E sub j. And I can write for E sub j the magnitude of E times the direction cosines l sub j where these are the same direction cosines of the applied field.

OK, now we have two relations in reduced subscript form that we can do something with. The magnitude of the conductivity in the direction of E, that is, in the direction that has direction cosines li, is going to be J parallel over the magnitude of E. We can get J parallel from the dot product J.E, except that the dot product is the magnitude of J times the magnitude of E times the cosine of the angle between them.

And so if I just want J parallel, I'd want to divide this by the magnitude of E. And that will give me just the part of J that falls parallel to E. And then down here in the denominator, I'll have magnitude of E.

So if I tidy this up a little bit to bring everything on the same level, we can write the dot product of J and E as Ji Ei, like J1 E1 plus J2 E2 plus J3 E3. And then I will have in the denominator magnitude of E squared.

But we know what Ei is. It's going to be magnitude of E times li. And I know how to find the i-th component of J. That's going to be sigma ij times E sub j. And I can write E sub j as magnitude of E times l sub j. So I have sigma ij times magnitude of E times l sub j, times l sub i times magnitude of E, all divided by magnitude of E squared.

Magnitude of E drops out. Again, the conductivity is linear the way we've defined it. It shouldn't depend on the magnitude of E. And what I'm left with then is a very simple and a rather profound relation. It says that, in a direction specified by the set of direction cosines l sub i, the magnitude of the conductivity in that direction is going to be li lj times sigma ij.

So that is an expression that involves the three direction cosines in which you're applying the field and measuring the property. But this is a linear combination of every one of the elements in the original tensor.

So if I write this out, because this is an important relation, it's going to be l1 squared sigma 1, 1 plus l2 squared sigma 2, 2 plus l3 squared sigma 3, 3. And then there will be cross-terms of the form 2 l1 l2 sigma 1, 2 plus 2 l2 l3 times sigma 2, 3 plus 2 l3 l1 times sigma 1, 3.

That has made an assumption. And probably I should not have done it at this point. That's assuming that the term l1 l2 sigma 1, 2 and the term l2 l1 sigma 2, 1 can be lumped together. And maybe I shouldn't write it that way. Because that assumes that the tensor is symmetric. And it doesn't have to be for some properties.

So let me back off and write it without any assumptions. So the cross-terms would be sigma 1, 2 l1 l2 plus sigma 2, 1 l2 l1, plus sigma 1, 3 l1 l3 plus sigma 3, 1 l3 l1, plus sigma 2, 3 l2 l3 plus sigma 3, 2 l3 l2. So there is an explicit statement of this more compact expression written in reduced subscript notation.
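As another illustrative aside not from the lecture, the compact expression li lj sigma ij and the nine written-out terms can be checked against one another numerically; the tensor below is made up and deliberately non-symmetric.

```python
import numpy as np

# Hypothetical, deliberately non-symmetric second-rank property tensor.
sigma = np.array([[3.0, 0.5, 0.1],
                  [0.3, 2.0, 0.2],
                  [0.0, 0.4, 1.0]])

def sigma_along(l, sigma):
    """Property in the direction with direction cosines l: l_i l_j sigma_ij."""
    l = np.asarray(l, dtype=float)
    l = l / np.linalg.norm(l)
    return np.einsum('i,j,ij->', l, l, sigma)

# Spot check against the written-out polynomial for one direction.
l = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
expanded = (l[0]**2 * sigma[0, 0] + l[1]**2 * sigma[1, 1] + l[2]**2 * sigma[2, 2]
            + sigma[0, 1]*l[0]*l[1] + sigma[1, 0]*l[1]*l[0]
            + sigma[0, 2]*l[0]*l[2] + sigma[2, 0]*l[2]*l[0]
            + sigma[1, 2]*l[1]*l[2] + sigma[2, 1]*l[2]*l[1])
assert np.isclose(sigma_along(l, sigma), expanded)

# Along the reference axes, the formula picks out the diagonal elements.
assert np.isclose(sigma_along([1, 0, 0], sigma), sigma[0, 0])
assert np.isclose(sigma_along([0, 1, 0], sigma), sigma[1, 1])
assert np.isclose(sigma_along([0, 0, 1], sigma), sigma[2, 2])
```

The last three checks anticipate what the lecture does next: plugging in a direction along one of the reference axes picks out the corresponding diagonal element directly.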

OK, so this is something that is going to tell us in a very neat sort of way what the anisotropy of the property is. You pick the direction in which you're interested. And this will tell you the value of the property in that direction and, therefore, by inference how things will change with direction.

Let me quickly, and then I'll pause to see if you have any questions, give you an interpretation of the meaning of some of these elements of the tensor. We've really not been able to do that. We've been able to use them to give the value of the property in a given direction but not their individual meaning.

Suppose I ask what the value of the property would be along the x1 direction. So suppose we ask what value of the conductivity we would measure along x1 for a given tensor sigma ij. If that is the case, l1 is equal to 1. The angle between the x1 direction and x2 is 90 degrees, and the cosine of 90 degrees is zero, so l2 is 0. And l3 would likewise be equal to 0.

So if I put this set of direction cosines into this complicated polynomial, the term l1 squared sigma 1, 1 stays in. But l2 is 0. l3 is 0. And I've got either an l2 or an l3 or both in all of these other terms. So along the x1 direction, the only term that stays in is sigma 1, 1 times l1 squared.

But l1 is unity. So along the x1 direction the value of sigma is sigma 1, 1. And by inference the value of the property that we would measure along x2 would be sigma 2, 2. And the value of the property along x3, guess what, sigma 3, 3.

So in this three by three array of numbers for sigma ij, there's a very direct meaning to the diagonal values of the tensor. These are the values that you would measure along x1, along x2, and along x3 directly. So the off diagonal terms are going to be saying something about how the extreme values of the property are aligned relative to our reference axes. OK, any comment or questions at this point? Yeah, [INAUDIBLE]?

AUDIENCE: [INAUDIBLE].

PROFESSOR: Yeah, we've defined those as the cosines of the angles that the direction makes with the reference axes. And this is the direction in which we're applying our electric field or our generalized force: a temperature gradient, concentration gradient, magnetic field, or what have you.

OK, I've got some time left. So let me push this further into a different form by using the elements of the tensor to define a locus in space. Again, what I'll do is something that may seem silly. But we'll see that there are some very useful and profound conclusions to be drawn from it.

Let me take the elements of the tensor and use them as coefficients to define a surface. I'm going to take sigma 1, 1 and multiply it by the product of the coordinate x1 times x1, where I'm viewing the coordinates now as variables. I'm going to take sigma 2, 2 and multiply it by the coordinates x2 and x2 again. Sigma 3, 3 I multiply by x3 times x3.

And then I'll have these cross-terms, sigma 1, 2, x1, x2, and so on. Or in short, I'm going to define a function sigma ij xi xj. And xi and xj are running variables. x1, x2, x3, they can extend from 0 to infinity.

But now I'm going to take this sum of nine terms and I'm going to set it equal to a constant. And what constant could be neater and cleaner and more abstract than unity? But I could say that some function of x1, x2, x3 equal to a constant is going to define a locus of points in space, which satisfy that equation.

And we do that all the time. We specify some functional relationship that defines in space some surface, f of x1, x2, x3 equals a constant. And that's done all the time in mathematics. I've yet to encounter a department of mathematics that did not have in its corridors some glass case filled with yellowing plaster figures, some of them cracking or already cracked, that represented exotic surfaces, like elliptic paraboloids and things of that sort.

I think the last time I walked down the corridor of building two there really was one of those in our mathematics department, by the little door that opened out onto the Great Court about halfway down that corridor. But you've seen these things I'm sure, elliptic cylinders, elliptic cones, and all sorts of glorious things.

This function, though, is a very special one. This is an equation that is quadratic, second order, in the coordinates. And this is referred to as a quadratic form. And sometimes for short, since that's something of a mouthful, one refers to this as a quadric.

And we are going to very shortly demonstrate that this particular function tells you everything you could possibly want to know about the variation of a tensor property with direction, how the magnitude of the property changes, how the direction of the resulting current flow or flux is oriented.

And this is, consequently, called, when applied to a tensor, the representation quadric. Because it really does represent anything you would like to know about the physical property. This will be demonstrated in short order.

Let's first notice, though, that there's only a limited number of surfaces that can be represented by this equation. One of them is one that we're all familiar with, an ellipsoid. And that's most easily represented when we've referred it to the principal axes of the ellipsoid.

So an ellipsoid is a quasi-sausage-like thing like this. The sections perpendicular to x3, x1, and x2 are all ellipses. And if this semi-axis is a and this semi-axis is b and this semi-axis is c, the equation of the surface in that special orientation is x1 squared over a squared plus x2 squared over b squared plus x3 squared over c squared equals 1.

OK, the surface that we have defined involves tensor elements. And if the equation is ever going to get into this form, we've had to get rid of these cross-terms. And we are going to have principal axes that are some function of the tensor elements that would be out here in the form of some aggregate sigma ij prime.

So we'll have to see how one could go from a tensor that describes an ellipsoid in a general orientation to something that has been diagonalized and has the coordinate system taken along the principal axes. But that's only one possible surface that we could encounter.

The second one is one that we might encounter that, when we put it in diagonal form, has this form: x1 squared over a squared plus x2 squared over b squared minus x3 squared over c squared equals 1. This is something that has a peculiar shape. It's sort of an hourglass-like figure.

It has an elliptical cross-section. This is x3. This is x1. This is x2. It has an elliptical section perpendicular to x3. But it has, as cross-sections in planes that are parallel to x3, hyperbolas. And the shape of the hyperbolas depends on which particular section you take through x3.

This is a surface that's called a hyperboloid. And it's all one continuous surface. So this is called a hyperboloid of one sheet, sheet in the sense of surface. And then down in here, when x3 is 0, it would again be an elliptical cross-section, but a little bit smaller than when we increase x3 above 0.

OK, and another surface, a third kind of surface, is something that occurs when the terms in front of two of the coordinates are negative: x1 squared over a squared minus x2 squared over b squared minus x3 squared over c squared equals 1. This is something that looks like two surfaces nose to nose. The cross-sections of these two different sheets are ellipses.

The cross-sections passing through-- let's see. This would be the shape for x1. That would be the special direction. And then this could be x2 and this x3. OK, this is called a hyperboloid of two sheets. And it has the property that the distances to the surface in orientations in between the asymptotes of these hyperbolic sections, these radii, are imaginary.

And between the asymptotes, if we take a section, we would have two hyperbolas in cross-section. And again, there is a range of directions between the asymptotes in which the radius would be imaginary. In directions along the coordinate axis that has the positive sign, we'd have a minimum radius. And this would get progressively larger and go to infinity along the asymptote.

Finally, there's one more surface in which, by extension, all three terms in x1, x2, and x3 have negative signs. And what does this look like?

AUDIENCE: [INAUDIBLE]

PROFESSOR: No, it does. It's something that's called an imaginary ellipsoid, so called because the radius in all directions is imaginary. And what does it look like? You just have to use your imagination. That's all. Because it's an imaginary ellipsoid.

Is that something that we could ever get if we were using the elements of a property tensor? The answer, he says with a big wink, is yes. It's unusual, but yes. You can get physical properties whose magnitude as a function of direction does this.
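As a final illustrative aside not from the lecture: because the antisymmetric part of sigma ij contributes nothing to the quadratic form sigma ij xi xj, the representation quadric can be classified by the signs of the principal values of the symmetric part of the tensor. Here is a minimal sketch, with made-up tensors, that ignores degenerate cases where a principal value is exactly zero.

```python
import numpy as np

def classify_quadric(sigma):
    """Classify the quadric sigma_ij x_i x_j = 1 by the signs of its principal values.

    Only the symmetric part matters, since the antisymmetric part drops out of
    the quadratic form. Degenerate cases (a zero principal value) are not handled.
    """
    s = 0.5 * (sigma + sigma.T)        # symmetric part of the tensor
    principal = np.linalg.eigvalsh(s)  # principal values
    n_pos = int(np.sum(principal > 0))
    if n_pos == 3:
        return "ellipsoid"
    if n_pos == 2:
        return "hyperboloid of one sheet"
    if n_pos == 1:
        return "hyperboloid of two sheets"
    return "imaginary ellipsoid (all radii imaginary)"

print(classify_quadric(np.diag([ 3.0,  2.0,  1.0])))   # ellipsoid
print(classify_quadric(np.diag([ 3.0,  2.0, -1.0])))   # hyperboloid of one sheet
print(classify_quadric(np.diag([ 3.0, -2.0, -1.0])))   # hyperboloid of two sheets
print(classify_quadric(np.diag([-3.0, -2.0, -1.0])))   # imaginary ellipsoid
```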

All right, I think that is a sufficiently formal and deadening component to our presentation that it would be appropriate to take a break and stretch a little bit. When we return, I will show you an amazing connection between the anisotropy of a physical property and this representation surface. So called because it tells you how the property is going to change.
