An example from my own career: while working in the US for a US-Canadian client with a vague approach to unit
consistency (the Americans used inches, slugs and so on; the Canadians mm, N and so on), I managed to build an FE
model with the overall dimensions in inches but the shell thicknesses specified in mm. My model was rock solid, no
deflection, no stress to mention, but mainly because it was 25.4 times too thick, which became apparent during other
model checks when I looked at the mass of the FEM.
The point is that it ran and gave me the wrong results.
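A mass check like the one that caught this error takes only a few lines; the masses and tolerance below are illustrative, not from the original model:

```python
def mass_check(fe_mass, expected_mass, tol=0.05):
    """Flag an FE model whose total mass differs from the expected
    (hand-calculated or weighed) mass by more than a fractional tolerance."""
    return abs(fe_mass / expected_mass - 1.0) <= tol

# A shell model 25.4x too thick is roughly 25.4x too heavy:
assert not mass_check(25.4 * 12.0, 12.0)   # fails the check, as it should
assert mass_check(12.1, 12.0)              # within 5%, passes
```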
With a linear model you can apply 1N and it deforms 1mm, but you can also apply 1MN and it will deform 1 kilometer – the linear statics solution is happy.
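The indifference of a linear solve to load magnitude is easy to see with a one-degree-of-freedom stand-in for the stiffness matrix (values are illustrative):

```python
k = 1.0                  # N/mm, single-DOF "stiffness matrix"
x_small = 1.0 / k        # 1 N  -> 1 mm
x_large = 1.0e6 / k      # 1 MN -> 1e6 mm, i.e. a kilometre
# The solution scales perfectly linearly; nothing warns you it is absurd.
assert x_large == 1.0e6 * x_small
```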
Make the same mistake with a non-linear analysis and it probably won’t converge. If that were the only possible
cause it would be relatively simple to debug and get answers, but once you’ve added plasticity, failure, contact,
follower forces and so on to the model, the set of conditions that results in a failure to converge can be much,
much larger than the set where the solution completes.
So, how do we approach these models? With a stepwise process, gradually building up the complexity and checking the model and results at each stage.
As usual, we’ll illustrate this with an example. We want to simulate crushing this laminated CFRP angle test
piece between two plates, considering ply failure and delamination.
So we have a number of complexities to consider beyond a ‘normal’ linear static analysis of an isotropic material:
- Large displacement/rotation
- Laminated 3D orthotropic material
- Ply failure based on stress
- Delamination between layers
We could build a model incorporating all of these aspects and run it. If it completes first time, brilliant, but if
it doesn’t we need to debug to understand what the source of the problem is. This generally means turning off
a lot of the complexity and adding it back in, so why not just do this from the beginning? We’ll run through
the stages we might consider as part of a robust model creation process below.
First, we’ll check that the contact conditions are correct and that moving the upper plate towards the lower
plate achieves the kind of response we’re after. A common problem when moving from simple boundary conditions to
sliding contact problems is that it’s easy to under-constrain the model.
In this example we have four contact bodies: the red one is the deformable test piece, the blue one is the ground,
and the yellow one is velocity controlled and will move down to crush the test piece. We’ve introduced the green
rigid body to control sliding in the X direction: we’ve set the separation force between the red and green
parts to be very high, so as the part compresses it cannot move through or away from the green end stop but can
still move within the plane of the green rigid body. We’re starting out with a simple isotropic material here where
we’ve set values for E, nu, G taken as equivalents to the eventual layup. This eliminates issues such as
incorrectly defined layups from the problem to begin with. We run the model as is and this happens.
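The article doesn’t give the smearing method used for those equivalents; a crude average of ply in-plane moduli (illustrative UD CFRP values, not the article’s data) shows the idea, though anything quantitative would use classical laminate theory:

```python
# Illustrative UD CFRP ply moduli in MPa (assumed, not from the article)
E1, E2 = 135e3, 10e3
# Crude in-plane "smeared" modulus for equal-thickness plies, ignoring
# orientation coupling; classical laminate theory would do this properly.
E_eq = (E1 + E2) / 2.0
assert E_eq == 72.5e3
```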
The contact conditions are not enough to constrain the model in the Y direction, so just the tiniest numerical
imbalance is enough to squeeze the part out from the rigid surfaces. We can address this with a simple node
constraint in Y applied to one of the nodes, for which we will check the reaction forces to make sure it’s not
a large effect. Re-running gives us this result.
Looking at the reaction force of the added constrained node we can see the magnitude is around 1e-4N, so negligible.
The reaction force for the compressing rigid body is around 2.3e4 N.
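Comparing the two reactions confirms the added constraint is benign; the arithmetic is simply:

```python
constraint_reaction = 1.0e-4   # N, reaction at the added Y constraint
applied_reaction = 2.3e4       # N, reaction on the compressing rigid body
ratio = constraint_reaction / applied_reaction
# Around 4e-9: the added constraint carries a negligible share of the load.
assert ratio < 1e-8
```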
So we have established that the contact conditions are OK. The component is squeezed between the parts, it slides
where we want it to slide and the force displacement response is nice and smooth.
Our next step is to update the material definition to 3D orthotropic. We have a five-layer laminate.
Since we ultimately want to include delamination between layers, we use 5 elements through the thickness of the
part, each defined as a separate but identical material, and we use coordinate systems to define the orientation of
each layer to achieve the 45/0/45/0/45 layup we require.
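A symmetry check on the ply angle stack is trivial to automate before the run (angles as listed above):

```python
layup = [45, 0, 45, 0, 45]   # ply angles in degrees, bottom to top
# A symmetric laminate reads the same from either face:
assert layup == layup[::-1], "layup is not symmetric about the mid-plane"
```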
We run this model to check that we have a symmetrical layup; any asymmetry would be evident from the deformed
shape of the part. The plot is of displacement in the out-of-plane direction and appears symmetrical. A detailed
examination of the results bears this out.
The force required to compress the part shows that as the displacement increases the isotropic material
approximation is very poor indeed.
We now have a stable layered orthotropic model. Our next increment of fidelity is to add failure criteria. We do
this by adding a damage model to the material and specifying the limit on, in this instance, stress.
We can use the progressive failure model so that when a failure criterion is exceeded (i.e. the stress component
divided by its allowable reaches 1.0 or higher) the stiffness of that component is reset to 1% of its original
value, thereby simulating the failure of that ply of that element.
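That degradation rule can be sketched directly (the ply modulus and allowable are illustrative values):

```python
def degraded_modulus(stress, allowable, E, knockdown=0.01):
    """Progressive failure: once the failure index stress/allowable reaches
    1.0, the stiffness component is reset to 1% of its original value."""
    failure_index = abs(stress) / allowable
    return E * knockdown if failure_index >= 1.0 else E

assert degraded_modulus(400.0, 500.0, 135e3) == 135e3    # intact ply
assert degraded_modulus(600.0, 500.0, 135e3) == 1350.0   # failed ply
```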
We can now plot the failure criteria as the part is crushed and see the effect on the force displacement curve of
the ply failures towards the end of the load stroke.
Our model now has the failure of the laminate included so our final step is to add delamination. There are a number
of models to do this, but we are going to use the simplest. We define a shear and tension stress limit for
interlaminar failure between defined materials.
When the combination of tension and shear damage exceeds the allowable, the software separates the shared nodes,
allowing the plies to peel apart. We can run this model and see the effect. We’re plotting the delamination
index, which triggers delamination when it exceeds 1.0.
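The exact interaction Marc uses isn’t spelled out here; a common quadratic tension-shear form illustrates how such an index behaves (the stress limits are made up):

```python
def delamination_index(sigma_n, tau, S_n, S_s):
    """Quadratic interaction of interlaminar tension and shear (assumed form).
    Compression (sigma_n < 0) does not drive opening, so it is clipped."""
    tension = max(sigma_n, 0.0) / S_n
    shear = abs(tau) / S_s
    return tension ** 2 + shear ** 2

# Nodes separate once the index exceeds 1.0:
assert delamination_index(30.0, 40.0, 60.0, 80.0) == 0.5   # intact
assert delamination_index(60.0, 80.0, 60.0, 80.0) == 2.0   # delaminated
```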
The effect on the force/displacement response is very obvious.
We’ve got a pretty good model of this event now. We have an accurate laminate material that can fail
individual elements in one or more stiffness directions and is capable of modelling the separation between the
layers without first having to define where it can take place. We only had one real issue, the contact
problem at the very beginning, and since the run times for these steps using parallel processing in MSC Marc are no
more than 4 minutes each, the whole process fits within an afternoon’s work. This example was specific to Marc but
the ethos is applicable to any FEA code. As you build experience and confidence you can skip or combine some steps,
but this is particularly useful when using a feature for the first time.
If you have found this useful, have any questions, or if you are interested in modelling composites in this kind of
detail please get in touch.