
#026: Modes - Introduction

I think the concept of ‘modes’ is often misunderstood. I certainly have been confused at times (and sometimes I still am). I think some of the confusion comes from textbooks and blog posts not being concise enough in their descriptions, and also from the concepts being introduced differently in different courses. And so, in an attempt to clear up some of this confusion, I have made a blog post series on the topic that will hopefully help pin down some important points. I have tried to keep it light enough on the mathematics (there is quite a lot in this first one) that it doesn’t turn into a journal paper, but hopefully comprehensive enough that all points made are backed by mathematical reasoning. It is much longer than I was initially aiming for, and so I have split it into several posts. Comments will, as always, be greatly appreciated.

Let’s first get some definitions out of the way:

  • In engineering, the term ‘modal analysis’ is often used to describe a technique (mathematical, or based on practical measurement) for finding and describing resonant behavior via certain so-called ‘mode shapes’ with associated ‘resonance frequencies’ or ‘modal frequencies’.

  • In mathematics, you are more likely to see similar characteristics described with the term ‘eigenvalue problem’. Here, you will also meet the terms ‘eigenvectors’ and ‘eigenvalues’.

  • In specific engineering classes, such as structural mechanics or acoustics, the two above concepts are often mixed together in a manner that is not always conducive to the students’ understanding of each separate topic and of how they are interconnected.

We can approach the topic in different ways to address the above points. There are many topics that should be touched upon first, such as

  • Single vs Multiple Inputs and Outputs

  • Single degree-of-freedom (SDOF) vs Multiple degrees-of-freedom (MDOF)

  • Forced vs natural response

  • Transient vs steady-state response

but I will try to introduce them as needed instead. First though, a little mathematics is needed…

Mathematical description

We seek to describe a system via a number of state variables, collected in a vector x. We jump straight to the matrix form

\[ \dot{\mathbf{x}} = \mathbf{A}\,\mathbf{x} \]
On the left side is a vector containing the first-order time derivatives of the state variables. Multiplying out the right side, we see that each component in the vector on the left-hand side (LHS) is a weighted sum of all state variables on the right-hand side (RHS), so for example for a 2x2 A matrix we get

\[ \dot{x}_1 = a_{11}\,x_1 + a_{12}\,x_2 \]

and

\[ \dot{x}_2 = a_{21}\,x_1 + a_{22}\,x_2 \]
for a system described via two state variables. As one state variable might already be a time derivative of another state variable, we can describe higher-order systems via a number of first-order equations. This form describes the dynamics of the system and should be cast in its smallest possible form.
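
As a small worked example of this recasting (my own illustration, not taken from the post itself), consider a mass-spring-damper with mass m, damping c, and stiffness k, governed by the second-order equation m·u'' + c·u' + k·u = 0. Choosing displacement and velocity as the state variables puts it directly into the first-order matrix form above:

```latex
% Illustrative recasting of m u'' + c u' + k u = 0 into first-order form,
% with state variables x_1 = u (displacement) and x_2 = u' (velocity):
\begin{align*}
\dot{x}_1 &= x_2 ,\\
\dot{x}_2 &= -\tfrac{k}{m}\,x_1 - \tfrac{c}{m}\,x_2 ,
\end{align*}
% or, collected into the matrix form from above,
\begin{equation*}
\dot{\mathbf{x}} =
\begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \mathbf{A}\,\mathbf{x} .
\end{equation*}
```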

A good guess for a solution to this first-order matrix system is an exponential function, as its derivative is itself times a constant. With that, the state variable vector is written as

\[ \mathbf{x}(t) = \mathbf{v}\, e^{\lambda t} \]
where a common time dependency has been pulled out of all the state variable components. By inserting this into the original equation, we get

\[ \frac{\mathrm{d}}{\mathrm{d}t}\left( \mathbf{v}\, e^{\lambda t} \right) = \mathbf{A}\left( \mathbf{v}\, e^{\lambda t} \right) \]

and hence

\[ \lambda\, \mathbf{v}\, e^{\lambda t} = \mathbf{A}\, \mathbf{v}\, e^{\lambda t} , \]
as the vector v has no time dependency, and the exponential term is not affected by the matrix multiplication.

 

Short intermezzo: Before moving on, we will explicitly write out an equation used in the above, namely

\[ \frac{\mathrm{d}}{\mathrm{d}t}\, e^{\lambda t} = \lambda\, e^{\lambda t} . \]

It may seem like an obvious fact, but that is only because it is usually taught at an earlier stage, and typically to a broader audience, than what we are currently discussing. The abstraction level, however, is similar. We will use the above equation shortly.
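
If you prefer to convince yourself numerically, here is a minimal sketch (my own addition, with an arbitrary λ and time instant) that checks the relation against a finite-difference derivative:

```python
import numpy as np

lam = -0.3 + 2.0j       # an arbitrary (complex) rate, purely for illustration
t, dt = 0.7, 1e-6       # an arbitrary time instant and a small step

f = lambda tau: np.exp(lam * tau)

# Central finite-difference estimate of d/dt exp(lam*t) ...
numeric = (f(t + dt) - f(t - dt)) / (2 * dt)
# ... compared against lam * exp(lam*t); the two should agree closely.
print(numeric, lam * f(t))
```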

 

The time dependency can now be removed, and we end up with (flipping the RHS and LHS)

\[ \mathbf{A}\,\mathbf{v} = \lambda\,\mathbf{v} . \]
This form describes a so-called eigenvalue problem: in some instances, i.e. for some combinations of λ and v, multiplying the A matrix with the vector v simply returns a scaled version of v, namely λv. It is hopefully easy to imagine that this will happen only for certain combinations of values and vectors (as an exercise, try with a random 2x2 A matrix to find such combinations by hand). For each such combination there is a specific λ value, an eigenvalue of the system, and a specific associated eigenvector v. As shown below, the resulting ‘eigenstate variables’ are found by multiplying the time dependency with the eigenvector, where the index n keeps track of the eigenvalue/eigenvector number:

\[ \mathbf{x}_n(t) = \mathbf{v}_n\, e^{\lambda_n t} . \]
The total solution for the state variable vector is found as a linear combination of the above ‘eigenvectors with time dependency included’,

\[ \mathbf{x}(t) = \sum_n c_n\, \mathbf{v}_n\, e^{\lambda_n t} , \]

with each component multiplied by a coefficient to be determined via the initial state.
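
Numerically, the whole recipe takes only a few lines. The sketch below is my own illustration (the 2x2 matrix and the initial state are arbitrary placeholder values, not from the post); it uses numpy.linalg.eig to get the eigenvalues and eigenvectors, checks the relation Av = λv, and then builds the total solution from the initial state:

```python
import numpy as np

# An arbitrary 2x2 state matrix and initial state, chosen only for illustration.
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
x0 = np.array([1.0, 0.0])

# Eigenvalues lambda_n and eigenvectors v_n (stored as the columns of V).
lam, V = np.linalg.eig(A)

# Sanity check of the eigenvalue problem: A v_n = lambda_n v_n for each n.
for n in range(len(lam)):
    residual = np.linalg.norm(A @ V[:, n] - lam[n] * V[:, n])
    print("eigenvalue", lam[n], "  ||A v - lambda v|| =", residual)

# The coefficients c_n follow from the initial state: x(0) = sum_n c_n v_n,
# i.e. V c = x0, which is a plain linear system.
c = np.linalg.solve(V, x0)

def x(t):
    """Total solution x(t) = sum_n c_n v_n exp(lambda_n t)."""
    return (V * c) @ np.exp(lam * t)

print(x(0.0).real)   # reproduces x0 (up to round-off)
print(x(2.0).real)   # the state a little later in time
```

For a real-valued A matrix the complex eigenvalues come in conjugate pairs, so the imaginary parts cancel in the sum and the reconstructed state is real.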

If the above seems a bit awkward, try instead to look at the equation from earlier:

\[ \frac{\mathrm{d}}{\mathrm{d}t}\, e^{\lambda t} = \lambda\, e^{\lambda t} . \]
You probably feel more confident with that equation, and so you can use it to feel more at ease about the concept of eigenproblems: on the LHS we have a differential operator working on a function, with the result being a scaled version of the function itself. This is very similar to the matrix A acting as an operator on a vector, with the result being a scaled version of said vector. So, for this specific operator (the differential operator) there exists a function (the exponential) that behaves in this special manner (the result of the operation is the function itself, only scaled), and we call this function an eigenfunction of this operator, with an eigenvalue of λ. For other operators this function may not be an eigenfunction (try thinking of another operator, like squaring, and plug in the exponential. Do you get a scaled version of the exponential as output?), but these operators may have other eigenfunctions: try thinking of an operator to which a cosine function will be an eigenfunction. Also, can you think of an eigenfunction of, for example, the second derivative operation?
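
To spell out the squaring check hinted at above (my own addition, leaving the other two exercises to the reader):

```latex
% Applying the squaring operator to the exponential:
\begin{equation*}
\left( e^{\lambda t} \right)^2 = e^{2\lambda t} ,
\end{equation*}
% which is not a constant times e^{\lambda t} (the time dependency itself has changed),
% so the exponential is not an eigenfunction of the squaring operator.
```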

Similarly, the matrix A can have certain eigenvectors with certain associated eigenvalues. There is a kind of ‘self-sustainability’ when operating on these particular eigenvectors with A, which also hints at the dynamics of the associated physical systems…

Physics

We know that dynamic systems can potentially exhibit resonant behavior, which may or may not be wanted, depending on the application. We don’t want bridges to collapse when excited by wind, but we do want electric filters to have very distinct peaks in their responses. Dynamic systems have energy-storing elements in them; for example, for the bridge there is mass (kinetic energy) and stiffness/compliance (potential energy), and for such a system there is a possibility that at certain frequencies energy can be exchanged back and forth between the two in an undesired manner. At low frequencies the applied force goes into compressing the stiffness elements (pseudo-static, stiffness-controlled), but the accelerations are low, so the mass takes very little force to move. Conversely, at high frequencies a lot of the applied force goes into having to rapidly change the momentum (mass-controlled). In between, though, there can be frequencies at which the two effects match up in an ‘oscillatory synchronicity’, and where the constant application of excitation can have the response run amok. The number of degrees of freedom determines how many of these frequencies there will be. Luckily, dynamic systems will have losses that limit the response, but in the design process these resonant behaviors must be considered.
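
As a small numerical illustration of these three regions (my own sketch, with arbitrary values for mass, damping, and stiffness, not taken from the post), the displacement-per-force response of a single mass-spring-damper can be evaluated below, at, and above its resonance frequency:

```python
import numpy as np

# Arbitrary illustrative values: mass [kg], damping [N*s/m], stiffness [N/m].
m, c, k = 1.0, 0.5, 100.0
w0 = np.sqrt(k / m)                 # undamped resonance frequency [rad/s]

def H(w):
    """Displacement per unit force, x/F = 1 / (k - m w^2 + j c w)."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

# Stiffness-controlled (~1/k), resonant (loss-limited), and mass-controlled (~1/(m w^2)).
for w in (0.1 * w0, w0, 10.0 * w0):
    print(f"w = {w:7.2f} rad/s   |x/F| = {abs(H(w)):.3e}")
```

At the resonance frequency the magnitude is set by the damping term alone, which is exactly why loss is what keeps the response finite there.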

Mathematics and Physics

The matrix A is determined by the physical setup of the dynamic system. Its components will be determined both by the topology/geometry (how things are connected in the system) and by the associated parameter values, like density, inductance, or whatever characteristic values are needed to describe the physics in question. The type of state variable will depend on the physics. For structural mechanical systems the state variables will be tied to the energy relations of the energy-storing elements, so that displacement and velocity (and derivatives thereof) are good candidates. For electrical systems the state variables can be currents and voltages (and derivatives thereof).

Finding the ‘eigen-characteristics’ (eigenvalues and eigenvectors) of a system will give us insight into what kind of behavior to expect from the system, both when it is forced/excited and when it is not. In upcoming posts we will try to connect the mathematics and the physics across several types of physics and several different dimensions.

Next post

In the next post we will look at zero-dimensional problems with single or multiple degrees of freedom. The physics involved will be electronics, structural mechanics, and acoustics. I will also try to focus on why, at universities, there often seems to be a partial disconnect between eigenvalue problems and engineering problems.

Upcoming posts

  • Modes – 1D (looking at, e.g., a vibrating mechanical string and acoustic tubes)

  • Modes – 2D (looking at acoustic modes in a room)

More will probably come on 3D and other stuff, but let’s see.

