The basic ground-work of what has become known as MRI was laid in the 1950s, but it wasn’t until the 1970s that the first research machines were designed. The two main pioneers, Lauterbur and Mansfield, were awarded Nobel prizes for their work in the field.
The basic concept of an MRI scanner hasn’t changed since then, but improvements in design, materials and manufacturing technology have yielded devices with resolutions and throughput that the early machines couldn’t even approach. The science at the heart of an MRI scanner involves the electromagnetics behind different coil types operating at different frequencies.
A very high, homogeneous DC field aligns the magnetic spin of atoms. Gradient coils superimpose a field to ensure that only a thin section of the volume is at the correct field to exhibit resonance.
A short RF pulse excites the energy state of the atoms in the section. Sensors record the relaxation time of the excited atoms; this relaxation time identifies the material. The frequency required to elicit a response from the tissue’s magnetization depends on the strength of the externally applied magnetic field. For example, at 1.5 tesla the RF frequency would be around 64 MHz; at 3.0 T it would be around 128 MHz. Thus a slice of material can be imaged, and by moving the field gradient using the gradient coils, and pulsing the RF coils, several slices can be assembled into a full 3D map of the soft material in the scanning volume.
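The field-to-frequency relationship described here is the Larmor equation. As a quick sanity check of the quoted figures, here is a minimal Python sketch; the proton gyromagnetic ratio is a standard physical constant, not a value taken from this presentation.

```python
# Larmor frequency of hydrogen (1H) nuclei: f = gamma_bar * B0,
# where gamma_bar = gamma / (2*pi) ~= 42.577 MHz/T for protons.
GAMMA_BAR_MHZ_PER_T = 42.577

def larmor_frequency_mhz(b0_tesla: float) -> float:
    """RF frequency (MHz) needed to excite protons at field B0 (tesla)."""
    return GAMMA_BAR_MHZ_PER_T * b0_tesla

for b0 in (1.5, 3.0):
    print(f"B0 = {b0} T -> f = {larmor_frequency_mhz(b0):.1f} MHz")
```

Running this gives approximately 63.9 MHz at 1.5 T and 127.7 MHz at 3.0 T, matching the "range of 64 MHz" and "range of 128 MHz" quoted above.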
A critical factor in the design of an MRI device is the base DC field. The higher the field, the better the spatial and contrast resolution that can be achieved, and the associated higher frequencies can mean faster scan times. But whatever the strength of the base field, it must be homogeneous to within a few parts per million (ppm) over, typically, a 30 cm diameter sphere to give an artefact-free picture.
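A ppm homogeneity figure like this is straightforward to quantify. Here is a minimal sketch of the usual peak-to-peak measure over a set of field samples on the diameter spherical volume; the sample values are hypothetical, purely for illustration.

```python
def homogeneity_ppm(field_samples, b0):
    """Peak-to-peak field variation over the sample points,
    expressed in parts per million of the nominal field b0."""
    return (max(field_samples) - min(field_samples)) / b0 * 1e6

# Hypothetical samples (tesla) on a 30 cm sphere for a 1.5 T magnet
samples = [1.4999970, 1.5000015, 1.5000030, 1.4999985]
print(f"{homogeneity_ppm(samples, 1.5):.1f} ppm peak-to-peak")
```

These example samples span 6 microtesla about 1.5 T, i.e. a 4 ppm peak-to-peak homogeneity, of the order required for an artefact-free image.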
There are two main types of MRI scanner in the marketplace. The closed type has a tubular form, with coils surrounding the patient. The very high axial stray field is countered with specific shielding coils – these are additional coils at a larger diameter than the main MRI coils that produce an opposing flux to the stray field.
In the early days of MRI, to achieve the necessary central field, the proximity of the coils led to some patients complaining of claustrophobia, so the Open form was developed. This type uses a ferrous core as a return path, then pole pieces to smooth the field and provide an area in which to scan the patient.
Because of the saturation of the ferrous core these systems operate at a lower base field than the tubular closed scanner, and stray fields will be lower.
So for today’s presentation we’ll be looking at the closed form of scanner. For a whole-body scanner the only way to realistically achieve the DC magnetic field is by use of superconducting coils – resistive coils would generate losses that were too high for practical use.
Because of the very high stray fields present, the magnets must be shielded, to protect the environment around the magnet, and to protect the magnet from the environment. Perhaps the effect of not shielding an MRI magnet is best demonstrated by what happens when an object finds its way inside a shield.
Because of their spectacular nature, some of the incidents observed when shielding regulations are ignored can find their way into the popular press, or online. Google something like "MRI magnet accident" and you will see a large number of photos of everything from wheelchairs to floor polishers clamped to MRI magnets. It’s often referred to as "The Missile Effect" when something is dragged into the magnetic influence of a high-field magnet.
The force, and hence acceleration, is roughly proportional to the inverse square of the distance from the magnet, so once something ferrous is moving in the field, it’s difficult to stop. Non-ferrous metallic objects will have eddy currents excited in them by the gradient fields, and can become hot. So, again, anything metallic should be treated with extreme caution inside the shielded area. Individual issues around MRI shielding in use are recorded in MAUDE, the FDA’s Manufacturer and User Facility Device Experience database. So let’s take a look at the Federal requirement around shielding and how this affects the design process.
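Taking the rough inverse-square fall-off at face value, a quick sketch shows how sharply the attraction ramps up as an object approaches; the 3 m reference distance is an arbitrary assumption for illustration.

```python
def relative_force(distance_m, ref_distance_m=3.0):
    """Force relative to that at the reference distance,
    assuming the rough approximation F ~ 1/r^2."""
    return (ref_distance_m / distance_m) ** 2

# Each halving of the distance quadruples the pull
for d in (3.0, 1.5, 0.75):
    print(f"at {d} m: {relative_force(d):.0f}x the force at 3 m")
```

By the time a loose wheelchair has covered half the remaining distance the force has quadrupled, which is why objects caught in the field are so hard to stop.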
Guidance on site planning has been around since the 1980s. The effects of the magnets on ferrous objects, and vice versa, were already understood, and guidelines were available regarding distances, fields and orientations. Compute power in the 1980s, and even the early 1990s, was insufficient for accurate modelling of the shields, which led to estimates being used, and many shields were considerably over-designed. But advances in simulation technology and compute power have now made accurate calculations routine for a shielding designer.
The picture shown below represents a simplified version of the topology of an MRI magnet. The red coils are all superconducting solenoids.
There is a federal limit on stray field, so the surrounding area needs shielding from the magnet; we’ll take a look at this. The globally accepted legislation on humans subjected to a DC field states that the field must be below 5 gauss, so as not to pose a risk to medical devices such as pacemakers, as outlined in the early guidelines. So that is the first, and main, job of the shield: to contain the flux to a level of 5 gauss in areas where humans may be present. The secondary function is to shield the magnet itself from exterior sources. We don’t want power cables, transformers and the like to cause artefacts on patients’ scans when they switch on at an inopportune time. So the magnet itself needs electromagnetic shielding in order to perform accurately and reliably. The shielding, therefore, can be considered to be two-way. And, of course, all of this must be performed at minimum cost and with minimum structural impact on the building itself.
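To get a feel for why containment matters, here is a hedged sketch using a single equivalent current loop as a crude stand-in for an unshielded magnet; the 0.5 m radius and 1.5 T centre field are assumptions, not data from any real system. It steps along the axis until the Biot-Savart field drops below 5 gauss.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_axial_field(ampere_turns, radius_m, z_m):
    """On-axis flux density of a circular coil (Biot-Savart), tesla."""
    return MU0 * ampere_turns * radius_m**2 / (2 * (radius_m**2 + z_m**2) ** 1.5)

R = 0.5                      # assumed equivalent loop radius (m)
NI = 1.5 * 2 * R / MU0       # ampere-turns giving 1.5 T at the centre

z = 0.0
while loop_axial_field(NI, R, z) > 5e-4:   # 5 gauss = 5e-4 T
    z += 0.01
print(f"on-axis field falls below 5 G at about z = {z:.2f} m")
```

Even for this simplistic model, the unshielded 5 gauss line sits roughly 7 m from the magnet centre, well outside a typical scan room, which is exactly what the shield must pull in.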
Today we’re looking at Finite Element Analysis in the design process, and in particular, Opera FEA software. If we’re looking at the design of the base DC magnet, or the shielding for it, then we need to use a magnetostatic solver, with nonlinear capabilities. Opera uses just one degree of freedom per node, which makes it as efficient as possible in the solution of the model, in terms of both memory usage and solution speed.
When defining the shield in a Finite Element Analysis system you would normally define a volume which represented the 3-dimensional limits of the individual plates of shield material. This can lead to numerical difficulties due to large aspect ratio elements if you don’t use facilities such as Opera’s mosaic meshing to introduce hexahedral elements. Large aspect ratio tetrahedra can result in inaccurate answers or convergence problems. So in Opera we have additionally implemented the means to represent the shield material as purely a surface with a property giving the material and thickness. The user can make use of laminated materials, as many shields nowadays are made of laminated steels or mu-metals. This surface representation, or thin plate boundary condition as we call it, eliminates any issues around aspect ratios and numerical difficulties.
MRI designers tend to look at certain criteria to judge a successful design, so in an FEA system you’ll need to be able to view iso-surfaces, fields on planes, and calculate associated Legendre polynomial coefficients. We’ll take a look now at how all of these are achieved in Opera. In FEA there are different ways of modelling coils; which is most accurate and efficient can vary with the type of problem being solved. Firstly, we can use analytic representations of fixed current-density coils. These are placed independently of the mesh – the conductors don’t form part of the mesh in the solution. Hence models can be easier to define.
Alternatively, we can use a filamentary representation. Or we can choose to mesh the coil. For MRI base magnet or shielding design calculations, where accuracy is paramount, we use an analytical coil representation. The unique formulation of the Opera-3d magnetostatic solver overcomes the problems of cancellation errors associated with a general-purpose solver.
When modelling a structure such as a shield, where the thickness of the plate is small in comparison to its area, a volume representation is likely to exhibit one of several issues in the modelling process:
Firstly, in the geometry – creating volumes that accurately reflect details such as the corners is time-consuming and prone to errors. Secondly, in the meshing – elements are likely to be used inefficiently in some areas, and exhibit poor aspect ratios in others. Third, the poor aspect ratio elements can cause numerical issues during the solution process, especially in a nonlinear problem. So, as an alternative, in Opera you can make use of the so-called "thin plate boundary condition" to represent the shielding material as purely a surface during the modelling phase. This addresses all of the issues mentioned previously.
More information is available in a paper published by Chris Biddlecombe and Chris Riley; contact us if you’d like a pointer for that. Here we see a simplified model of a shielded room, as defined using the surface representation of the shield. The shielding material, in this case steel, is defined using just a surface. The Thin Plate Boundary Condition is assigned to the surface.
When it comes to defining the model in Opera you have a number of choices. You can define geometry using our own Modeller. Primitives are created then operations can be carried out to intersect, combine and trim etc. Once the geometry of the room is defined, properties are added for the various materials, the magnet defined and then the whole problem is enveloped in air for solution.
Rather than defining the room geometry in Opera’s Modeller, you can read a model of the room from a CAD file. This, again, shows the ease-of-use of the Thin Plate Boundary Condition in that the user need only select the walls that are shielded – there’s no requirement to then create them as thin meshable volumes by sweeping walls. This process of reading a STEP file, and selecting the walls with shielding is shown in the video that’s playing.
Finally, all of this can be scripted to automate the process. We have a sample script which defines the room, runs the analysis and produces the relevant output – contact us if you’d like a copy for your own Opera installation. Moving onto post-processing now, here we see two functions commonly utilised by MRI shielding designers.
A technique employed by designers of MRI magnets for assessing field homogeneity is to calculate associated Legendre Polynomials on a spherical surface. That harmonic series is a solution to Laplace’s Equation. Laplace’s Equation is valid in a source-free volume, and there are no sources within the central scanning volume. So from this equation and the Legendre Polynomials we can assess the field anywhere in the central volume. Looking at these coefficients is a simple way to compare the homogeneity of different magnet designs, and the relative success of design modifications. Smaller higher order harmonic coefficients mean a more homogeneous field. Opera-3d provides designers with a direct way to calculate these Legendre Polynomials and list them for any model.
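Opera computes these coefficients directly. Purely to illustrate the mathematics, here is a sketch that evaluates the on-axis field from a set of hypothetical zonal harmonic coefficients; on the axis cos θ = 1, so every Legendre polynomial evaluates to 1, and the coefficient values below are invented for the example.

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via Bonnet's recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def bz_on_axis(coeffs, z_m, r_ref=0.25):
    """Axial field B_z(z) = sum_n A_n (z/r_ref)^n P_n(cos theta), with
    cos theta = 1 on the axis; coeffs are zonal harmonic amplitudes in
    tesla at the reference radius r_ref (metres)."""
    return sum(a * (z_m / r_ref) ** n * legendre(n, 1.0)
               for n, a in enumerate(coeffs))

# Hypothetical coefficients: 1.5 T central term plus small even harmonics
coeffs = [1.5, 0.0, 2e-5, 0.0, -5e-6]
print(f"B_z at z = 0:      {bz_on_axis(coeffs, 0.0):.6f} T")
print(f"B_z at z = 0.25 m: {bz_on_axis(coeffs, 0.25):.6f} T")
```

The n = 0 term alone gives the central field; the higher-order terms describe how the field departs from it across the volume, which is why small high-order coefficients mean a homogeneous magnet.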
The zero-zero term gives the value of the central field, since r is zero, and hence we can see that this is a 1.5 T magnet. Looking at the important terms such as 2-0 and 4-0 for the unshielded magnet, and the magnet in situ, allows us to assess how the shield has affected the homogeneity of the magnet, and whether the system is still operable.
The issue that we are considering when looking at MRI shielding is a class of mathematical problem that is suitable for an automated optimization technique.
Since Opera contains an integrated Optimizer module we’ll look at that here to consider how to produce an optimal shield design, rather than an inefficient over-engineered example.
Opera’s Optimizer can solve constrained or unconstrained problems. The variables to be used are specified during the definition of the model. We can specify variable dimensions within the geometry, we can specify variable parameters such as the plate thickness. We can vary material properties, or the problem excitation can be defined as a parameter. As well as being multi-parameter, Opera’s Optimizer is also multi-objective – we can set more than one objective for the design exercise at a time. This is important in this case, since we can simultaneously consider an optimal design for field containment and central homogeneity.
When we have a multi-objective solution we will typically end up with multiple designs to consider, and for this we use a Pareto optimization technique to determine the family of best designs. The objectives are specified by the user by means of a script in the Post-processor. The script can be written by hand, or can be developed from a session logfile. Here is a simple example.
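Pareto ranking itself is a simple idea. Here is a minimal sketch of extracting the non-dominated set from a list of trial designs; the objective values are hypothetical, and all objectives are taken to be minimized.

```python
def dominates(a, b):
    """True if design a is no worse than b in every objective (all
    minimized) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of a list of objective tuples."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Hypothetical (steel volume, stray flux) pairs for five trial shields
trials = [(1.0, 0.9), (0.8, 1.2), (1.2, 0.7), (0.9, 1.0), (1.1, 1.1)]
print(pareto_front(trials))
```

In this example the design (1.1, 1.1) is dominated by (1.0, 0.9) and drops out; the remaining designs form the "family of best designs" among which the engineer trades steel volume against stray flux.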
The first objective is to minimize the amount of steel used in the design. The second objective is to minimize the total flux behind the shield, and the third objective is to minimize the perturbation of the field homogeneity in the central volume of the magnet. We might have two constraints for this type of problem.
Firstly, we have our 5 Gauss constraint for the stray field. And, for shimming purposes, we might specify a constraint of a change in the central field of the magnet of less than 15 Gauss.
The parameters that we are giving the Optimizer to work with are the dimensions of the plate – its length, height and thickness.
Within the Optimizer the user can specify limits for the input parameters. So here we’re specifying that we want to vary the height, length and thickness, and giving limits over which the Optimizer can vary them.
Earlier in the presentation we mentioned that output variables are calculated using a script in the post-processor, so here is a sample. Firstly, it calculates the field on a square patch just behind the plate, and for documentation purposes, it outputs the fields to an internal buffer. Secondly, it carries out the Legendre Polynomial calculation on a sphere of radius 25cm. We then prepare the variables to pass back to the Optimizer. #intB is calculated as the integral of B on the patch and #maxB is calculated as the maximum value of B on the patch.
The most significant Legendre polynomial coefficients are normalised to the central field before passing them to the Optimizer. We want to check that the value of the central field hasn’t varied out of range, so we calculate that next. Finally, we calculate the volume of the shield.
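The actual script is written in Opera’s own command language. As a language-neutral illustration of the quantities involved, here is a Python sketch computing patch-integral and patch-maximum values analogous to #intB and #maxB; the stray-field function and patch size are hypothetical.

```python
import math

def patch_field_stats(bfield, x0, x1, y0, y1, n=21):
    """Sample |B| on an n x n grid over a rectangular patch and return
    (integral of B over the patch, maximum B on the patch)."""
    dx, dy = (x1 - x0) / (n - 1), (y1 - y0) / (n - 1)
    samples = [bfield(x0 + i * dx, y0 + j * dy)
               for i in range(n) for j in range(n)]
    int_b = sum(samples) * dx * dy      # crude rectangle-rule integral
    return int_b, max(samples)

# Hypothetical stray field behind the plate, decaying away from centre
def stray_b(x, y):
    return 4e-4 * math.exp(-(x**2 + y**2))   # tesla

int_b, max_b = patch_field_stats(stray_b, -0.5, 0.5, -0.5, 0.5)
print(f"#intB = {int_b:.3e} T*m^2, #maxB = {max_b * 1e4:.2f} gauss")
```

Quantities like these become the output variables that the Optimizer compares against the 5 gauss and 15 gauss limits described above.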
The variables calculated using the script are passed back to the Optimizer and we select which to use as objectives. For this particular instance we’ll make use of the second and fourth harmonics, and the volume of the shield. The constraints are specified as the maximum stray field should be less than or equal to 5 Gauss, and the change in the fundamental field should be less than or equal to 15 Gauss.
The output of the Optimizer lists all of the trial designs, and ranks the potential solutions in order. The analyses in blue are the highest ranked, and satisfy both constraints. Further down the list will be some designs that have not met the constraints. Here we see the results graphically. On the left is the Objective function space. The vertical axis is the volume of the plate.
On the right we plot the Design variable space. The vertical axis represents the plate thickness, the horizontal axes the length and height. You can see a cluster of Pareto optimal designs along a line representing a height of 250-300, since one goal is to minimize the volume of the plate. Here we see a map of the constraints during the Optimization process.
By clicking on any potential design in the Optimization list we can see the details of the parameters used. When running the Optimization process the user can choose how many models are saved. To reduce disk usage, the user can choose to save none but then rebuild a required model after the fact by right-clicking on the sample in the Optimizer pane.
Now we’ll look at the sort of problem presented by the common task of upgrading an old 1.5 T magnet to a new 3 T magnet. The installer of the 3 T magnet has the support of the magnet manufacturer, so goes through the sort of process we’ve outlined already. Rather than being thrown away, the 1.5 T magnet tends to get sold on to a second hospital, with no magnet manufacturer involvement.
So the only information that the installer has to work with is the stray field plot. The stray field plots give the magnitude of the field, but not the directions.
So to calculate shielding requirements for this scenario we need first to develop a model of the magnet. The advantage that the shield designer has here is that they are using the same tool as the base magnet designer, so some assumptions can be made, and they are not usually concerned about the magnet homogeneity. If they are concerned about the mechanical forces generated by the magnet, these can be calculated in the Opera post-processor.
Here we see a typical stray field plot as supplied by a magnet manufacturer, superimposed on a section of the desired installation space.
We know that the stray field map has only magnitudes, and not directions. We know that on the magnet axis, the field is axial. We know that on the centre plane, the field is also axial. The magnets are designed to be symmetric in nature, so we know that the stray field map is symmetric. Finally, the major assumption that we can make helps greatly – we know that manufacturers typically use only a few different coil arrangements.
Here we see two different magnet configurations used by manufacturers.
We know from the measured stray field map where the 5 gauss line intercepts the axes, so we can set up different objectives based on those axial values. From those values we can create post-processing scripts as shown. For example, we know that the 5 gauss line crosses the Z-axis (i.e. the magnet axis) at 420.5 cm. So, if we calculate the field at that distance, we can minimize the difference between the calculated and required value as one of our objectives.
The model we’re solving in this problem is simple – just coils and a mesh of air surrounding the coils, with both a reduced and total scalar potential region. For this example we have 3 variables defined. They are scaling factors applied to the current density, the radial dimension and the axial dimension. Customers have used many more variables in real examples, with coil cross-sections and individual dimensions being varied. We have 2 objectives: to minimize the difference between the calculated stray field at the crossing points and the desired value of 5 gauss. We set constraints to make sure that the field is at least 5 gauss because, for a factor of safety, we want to ensure that we have not under-predicted the field from the coil-set.
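As an illustration of the idea, though not the Opera workflow itself, here is a sketch that solves directly for the ampere-turns of a single equivalent loop so that its on-axis field hits 5 gauss at the measured 420.5 cm crossing; the 0.5 m loop radius is an assumption.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (T*m/A)
Z_CROSS = 4.205        # measured 5 gauss crossing on the magnet axis (m)
B_TARGET = 5e-4        # 5 gauss in tesla
R = 0.5                # assumed equivalent loop radius (m)

# On-axis field of a loop: B(z) = MU0 * NI * R^2 / (2 * (R^2 + z^2)^1.5).
# Invert it for the ampere-turns NI that put 5 G at z = Z_CROSS.
NI = B_TARGET * 2 * (R**2 + Z_CROSS**2) ** 1.5 / (MU0 * R**2)

def b_axis(z):
    """On-axis field of the fitted equivalent loop, tesla."""
    return MU0 * NI * R**2 / (2 * (R**2 + z**2) ** 1.5)

print(f"equivalent NI = {NI:.3e} ampere-turns")
print(f"check: B({Z_CROSS} m) = {b_axis(Z_CROSS) * 1e4:.2f} gauss")
```

A real fit uses several crossing points and more than one coil, which is exactly why this becomes a multi-variable optimization problem rather than a closed-form inversion.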
This Optimization run produces a set of prospective designs. Here we see the Pareto optimal front.
The TEAM benchmarks are not limited to simple AC problems with surface flaws. TEAM Problem 27 uses a transient excitation, and a series of flaws buried within the sample. The sample is a cylinder of aluminium with a hole down the centre. Three shapes of flaw are simulated; these are located on the inner surface of the cylinder. The extent of the flaws is modelled explicitly. All three flaws are physically present in each simulation – the “active” flaw is determined by setting the material properties of the corresponding region to those of air. A fourth model is also used in which no flaw is present. This method allows us to use the same mesh for each model and thereby exclude mesh-related numerical inaccuracies.

The probe is a coil of wire and a pair of Hall effect sensors. The coil is positioned co-axially with the sample. The probes are located level with the base of the drive coil, at a distance of 5 mm from the axis. One sensor is directly above the flaw; the other is diametrically opposite. The current in the coil is switched off suddenly, and the sensors detect the magnetic flux density from the induced eddy currents. The probes themselves are not explicitly modelled – we simply sample the field components at the appropriate locations in the model. We use a Biot-Savart coil – this provides a drive to the model but does not form a part of the mesh.

The model uses reflection symmetry in the plane passing through the centre of the flaw (and through both Hall effect sensors), with magnetic fields set to be tangential to this plane. This allows us to reduce the resource requirements for model solution.
The problem definition is to observe the difference in horizontal flux at the two sensors following the turn-off of the coil current, so we use a transient electromagnetic solution rather than a harmonic one. In the physical test, an initial 35 ms allows any transients induced by the turn-on of the coil to decay away, so the testcase measurements begin at 35 ms. The solution begins at time = 0 with a DC current of 1.5 A in the coil; modelling the first time-point as DC allows us to skip the initial 35 ms, as we are only interested in the effects of the current turn-off, not the turn-on. After this initial solution, the current is switched off. We use an adaptive time-step method to accurately capture the effects of this sharp transient.
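Opera’s adaptive time-stepping is built in; purely to illustrate the principle, here is a generic step-doubling sketch on a simple exponential current decay standing in for the eddy-current die-away. The circuit values and tolerances are hypothetical, and this is not Opera’s actual algorithm.

```python
def adaptive_decay(i0=1.5, tau=0.01, t_end=0.05, tol=1e-4):
    """Integrate dI/dt = -I/tau with explicit Euler, comparing one full
    step against two half-steps and halving dt while they disagree."""
    t, i, dt = 0.0, i0, 1e-3
    n_steps = 0
    while t < t_end:
        full = i - dt * i / tau                    # one Euler step of size dt
        half = i - (dt / 2) * i / tau
        two_half = half - (dt / 2) * half / tau    # two Euler steps of dt/2
        if abs(full - two_half) > tol:
            dt /= 2                                # estimate too coarse: refine
            continue
        t, i = t + dt, two_half                    # accept the finer result
        n_steps += 1
        dt = min(dt * 1.5, 1e-3)                   # cautiously re-grow the step
    return i, n_steps

i_final, n_steps = adaptive_decay()
print(f"I(0.05 s) ~ {i_final:.4f} A after {n_steps} accepted steps")
```

The step size shrinks automatically just after the sharp turn-off, where the solution changes fastest, and grows again as the transient dies away, which is the behaviour that matters for the TEAM 27 simulation.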
Moving away from Eddy Current testing now, and onto Magnetic Flux Leakage. The MFL measurement principle is based on the fact that when strongly magnetizing a steel tube, some of the flux will leak out of the tube. Flaws in the tube that reduce the wall cross-section alter the leakage pattern. The shape of the flux leakage, therefore, depends on the defect’s geometry. The presence of the leakage field at the surface of the material can then be detected by sensors such as coils or Hall probes, or it can be observed visually using magnetic particles. The specimen to be tested is usually magnetised by applying a direct current, with the form of the coil dependent on the application, or by using permanent magnets.
Let’s now take a look at some features of Finite Element Analysis which are crucial to the successful, efficient solution of magnetic flux leakage simulations. Because the method relies on inducing a high magnetic flux density in the pipe, accurate nonlinear material modelling is a must. Sometimes this is achieved with permanent magnets, so accurate magnetisation modelling is also a must.
High-speed non-destructive inspection systems using the MFL method are in great demand for online inspection and defect characterisation, especially in pipeline maintenance. Such schemes aim to deduce the shape and dimensions of a flaw from the voltage induced in a pick-up coil moving relatively quickly along the structure. MFL with relative motion presents an additional difficulty, in both signal interpretation and modelling, due to the formation of eddy currents when an excitation source moves over or inside a metallic pipeline. Numerical approaches to this problem include two schemes. Firstly, it can be solved quasi-statically, with an extra movement component added to the formulation. The formulation in Opera, which we will come onto later, avoids the numerical difficulties, i.e. inaccuracy and non-convergence, associated with increasing relative velocity and the material’s magnetic permeability. Or, secondly, the relative movement of the probe and pipe can be analysed over several time-steps, with the problem automatically repositioning and re-meshing as the solution progresses.
Competing goals and constraints such as those mentioned earlier mean that an automated Optimizer can often be the only way to achieve an optimal design. Static MFL methodology (or Magnetic Particle Inspection) involves magnetizing a portion of a structure and recording the flux at the surface. Usually a local magnetization close to saturation is required, because the leakage flux amplitude is generally proportional to the magnetization level. For something like a bulk liquid storage tank a magnetizing field of 1.5 to 2 T is commonly used. This allows for scanning on both sides of the wall. Using a lower field might require multiple passes to fully determine the location of the flaw, and this can be difficult because of residual magnetism in the steel after passing over the MFL scanner.
The most common sources of magnetizing field are electromagnets, or yokes with permanent magnets. The solution of such a problem is relatively simple with FEA – the material nonlinearities present provide no challenge to a solver such as Opera, provided the B-H curve data is suitably accurate and smooth.
This type of device is commonly used on structures such as storage tanks where the walls or floor are subject to corrosion. Constraints on the design may revolve around physical sizes of access hatches, or static load limits, so an Optimizer such as Opera’s can be useful in achieving the required flux density from a given size and weight of device.
When analysing a moving MFL system, you can make use of two different analysis types. Firstly, you can model a moving conductor in a static field by merely applying a velocity loading, assuming that the cross-section of the moving, conducting media is uniform. You might use this to check the saturation of the flux into the specimen. It will not predict the signal generated by a particular type or shape of flaw. For that, you would need to use a transient analysis, whereby the pieces move relative to one another and the problem transiently re-meshes as the solution progresses.
Naturally, the former type of analysis requires fewer resources to perform. We’ll now look at these two analysis types, and what they might be used for. When running a problem in which the mesh is constant and the relative motion is applied as a velocity loading, a standard FEA solver risks non-convergence of the solution. The main reason is that the diagonals of the conventional FEA equations approach zero, or even become negative, as the velocity increases. The Opera-3d solver used in this instance, on the other hand, makes use of a technique called upwinding.
Numerical solutions without upwinding display non-physical oscillation of the current distribution. This phenomenon is quantified by the Peclet number, which is proportional to permeability, conductivity, velocity and element size. Opera-3d introduced upwinding by modifying the FEA weight function to rectify the problem; it is a Streamline Upwind Petrov-Galerkin (SUPG) method in vector form. This has allowed greater speeds to be defined without the need for overly-refined meshes, and hence solution times have been greatly improved.
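The cell Peclet number can be estimated directly from those four quantities. Here is a sketch using one common definition, Pe = μσvh/2; the steel-pipe material values and scan speed below are assumptions for illustration.

```python
import math

def peclet(mu_r, sigma_s_per_m, velocity_m_per_s, elem_size_m):
    """Cell Peclet number Pe = mu * sigma * v * h / 2 (a common
    definition); values well above 1 indicate the mesh/velocity
    combination where upwinding becomes necessary."""
    mu0 = 4e-7 * math.pi
    return mu_r * mu0 * sigma_s_per_m * velocity_m_per_s * elem_size_m / 2

# Assumed steel pipe wall (mu_r ~ 500, sigma ~ 5 MS/m),
# scanned at 2 m/s with 5 mm elements
pe = peclet(500, 5e6, 2.0, 0.005)
print(f"Pe = {pe:.1f}")
```

Even at a modest 2 m/s scan speed this example gives a Peclet number well above 1, showing why a plain Galerkin solution would oscillate and the mesh would otherwise need heavy refinement.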