Repost: Simulations December 29, 2010
Posted by mareserinitatis in computers, electromagnetics, engineering, geophysics, research, science. Tags: electromagnetics, simulations
After reading this post and participating in the discussion, I felt that perhaps reposting this from the old blog was in order.
After posting this morning about how I hate computers, I figured I should temper that.
One thing I hear an awful lot of is how people don’t trust simulations. (They also don’t trust math, but let’s take one thing at a time.)
An awful lot of science can be done through simulations. However, as soon as you tell someone that you got your science out of a computer program that feeds you data or makes pretty pictures, you may as well have said you did your science with a kids’ chemistry set and drew your data in crayon.
Skepticism about computer methods is a good thing as long as you know where to draw the line. A couple years ago, I went to a tutorial session on different computation methods used in electromagnetic compatibility (EMC). At the end of the tutorial, a spontaneous discussion about the reliability, drawbacks, and validation of simulations came up. I’ll summarize some of the main points and talk about how I have addressed them.
I guess the first thing to address is that there are many different methods to simulate things, and each method has its drawbacks. As an example from electromagnetics (EM), folks often use something called the Finite Element Method (FEM). FEM is not unique to EM… it was actually first developed to examine mechanical engineering problems (think stress and strain). It works very well for electromagnetics too, with one caveat: whatever you’re modeling needs to be enclosed. If you don’t have an enclosed region (say, a shielded box over a circuit), you run into trouble, because FEM can’t mesh space infinitely. Methods have been developed to deal with devices that are radiating in open space. One is called a Perfectly Matched Layer (PML), which matches the impedance your radiator sees at the edge of the simulation space and then attenuates the field beyond that boundary.
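To give a feel for the PML idea, here is a heavily simplified sketch (a 1D scalar wave with a graded lossy layer, which is the spirit of a PML but not a true one, and not code from any real EM package): an outgoing pulse enters the ramped-loss region and decays instead of reflecting back into the problem space.

```python
# Minimal sketch: 1D wave on a finite grid with artificial loss ramped up
# over the last 40 cells, a crude stand-in for a PML's graded conductivity.
# Grid size, pulse shape, and loss profile are all illustrative choices.
import numpy as np

n, steps, c = 400, 2000, 1.0   # grid cells, time steps, wave speed
dx, dt = 1.0, 0.5              # dt chosen so c*dt/dx = 0.5 (stable)
u_prev = np.zeros(n)
u = np.zeros(n)

# Loss coefficient: zero in the interior, smoothly graded in the layer
# so the wave sees no abrupt impedance change at the interface.
sigma = np.zeros(n)
sigma[-40:] = 0.5 * np.linspace(0.0, 1.0, 40) ** 2

for t in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    # Update for the damped wave equation u_tt + sigma*u_t = c^2 * u_xx;
    # the sigma term attenuates the field inside the layer.
    u_next = (2 * u - (1 - 0.5 * sigma * dt) * u_prev
              + (c * dt / dx) ** 2 * lap) / (1 + 0.5 * sigma * dt)
    if t < 50:                  # inject a short Gaussian pulse near the left edge
        u_next[5] += np.exp(-((t - 25) / 8.0) ** 2)
    u_prev, u = u, u_next

# A small value here means the layer absorbed the pulse rather than
# reflecting it back into the interior.
print("max |field| left in the interior:", np.abs(u[:300]).max())
```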
I give this example because, as someone who has worked on antennas using FEM-based software, I know it’s important to understand these things. I didn’t at first, and it took a lot of work to figure out whether the software was even simulating correctly.
How did I do it? I used the approach every good simulation researcher uses: I validated my simulations. With antennas, I started out by modeling simple, well-known devices to see if the results matched the theoretical values. Since the simulation solves the same underlying equations the theoretical values are derived from, the two should be pretty close. Next, as my devices increased in complexity, I used another computational EM code called the Method of Moments (MoM). MoM is great for this because it works differently than FEM: FEM jumps straight into calculating fields, while MoM calculates the currents on an antenna (for example) and from those can compute the field at any given point. Once I was able to get simulations that matched either an analytical result or the other code, I could be fairly certain that I’d gotten the kinks out.
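To make that first validation step concrete, here is a minimal sketch (my own illustration, not code from the original work): numerically integrating the ideal half-wave dipole’s radiation pattern should recover the textbook radiation resistance of about 73 ohms, and if it doesn’t, something in the model is wrong.

```python
# Validation sketch: compute a quantity with a known analytic answer and
# check agreement before trusting anything more complicated. Here, the
# radiation resistance of an ideal half-wave dipole (~73.1 ohms).
import numpy as np
from scipy.integrate import quad

ETA0 = 376.730  # impedance of free space, ohms

def halfwave_integrand(theta):
    # Squared half-wave dipole pattern, weighted for integration over theta.
    return np.cos(0.5 * np.pi * np.cos(theta)) ** 2 / np.sin(theta)

# Tiny endpoint offsets avoid a 0/0 at theta = 0 and pi (the limit is 0).
integral, _ = quad(halfwave_integrand, 1e-9, np.pi - 1e-9)
r_rad = ETA0 / (2.0 * np.pi) * integral

print(f"computed R_rad = {r_rad:.2f} ohms (textbook value ~73.1 ohms)")
assert abs(r_rad - 73.1) < 0.5, "validation failed: check the model"
```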
Researchers in other areas (say, global climate change) validate as well. Their code, too, has to reproduce whatever analytical results exist, but they can validate more complex code by checking whether it generates something reasonably close to actual events and the known historical record.
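As a hedged illustration of that kind of hindcast check (a toy with made-up numbers, not any real climate code), the scoring step can be as simple as comparing a model run against the record it should have reproduced:

```python
# Hindcast-style validation sketch: run the model over a period where the
# answer is already known, then score how far it lands from observations.
import numpy as np

def hindcast_score(simulated, observed):
    """Root-mean-square error between a model run and the historical record."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Toy stand-ins for a model run and the measured record over the same span.
observed  = np.array([14.1, 14.3, 14.2, 14.5, 14.6])
simulated = np.array([14.0, 14.4, 14.3, 14.4, 14.7])

rmse = hindcast_score(simulated, observed)
print(f"hindcast RMSE = {rmse:.3f}")  # smaller error -> more confidence in the code
```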
The final step of validation, in my experience, is to take the code and run it on something more complicated. Usually, this is the point where you start looking for interesting journal articles to reproduce.
Now, in all fairness, I know that people don’t always follow these procedures, and that is where I believe people should start to be skeptical of results. In fact, the last step of validation can be the hardest even though it’s probably the most important. In my short lifetime in computational electromagnetics, I’ve had the misfortune of coming across papers whose predicted results were totally different from mine. In a couple of cases, I ended up writing the authors, only to find out that they had misprinted some dimensions. On the other hand, you don’t want to pursue that route until you’ve exhausted all your other options. In my case, moving part of a device by just a few millimeters (at high frequencies, a significant chunk of a wavelength) changed the resonance frequency of the entire device. That’s why learning to use the built-in placement functions rather than hand-entering coordinates is preferable.
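A quick back-of-the-envelope sketch (with illustrative numbers, not the dimensions from any particular paper) shows why a few millimeters matters so much at high frequency:

```python
# Why small placement errors matter: a fixed offset in millimeters becomes
# a growing fraction of a wavelength as the frequency goes up, which is
# exactly the kind of thing that shifts a device's resonance.
C0 = 299_792_458.0  # speed of light, m/s

def placement_error_in_wavelengths(freq_hz, error_mm):
    wavelength_mm = C0 / freq_hz * 1000.0
    return error_mm / wavelength_mm

for f_ghz in (1, 5, 10, 30):
    frac = placement_error_in_wavelengths(f_ghz * 1e9, error_mm=3.0)
    print(f"{f_ghz:>2} GHz: a 3 mm offset is {frac:.3f} wavelengths")
```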
However, those papers aren’t all that common (I hope… I can at least say I haven’t hit too many). More often than not, good researchers have tested their code to make sure it is accurate and representative of what they are trying to model. They have also reproduced previously known results to show that their method is sound.
The next time someone tries to tell you it’s just a model, you can reply by asking them how much they know about code validation. If you read this entire post, there’s a good chance you’ll know more about it than they do.
Nice post on the subject.
Climate and weather simulations are a different ball of wax in general. When they do borrow from FEM, the cell dimensions are much larger out of computational necessity, often missing geographic nuances. They also lack the data we EEs can easily measure with a scope probe, often after the fact. Perhaps the worst of it all is that they’re chaotic systems: small changes in the input parameters cause large changes in the final result.
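For anyone who hasn’t seen that sensitivity firsthand, here is a minimal demonstration using the logistic map (a standard textbook example of chaos, nothing to do with any actual weather code): two runs whose starting points differ by one part in a billion end up nowhere near each other.

```python
# Sensitivity to initial conditions: iterate the logistic map from two
# starting values that differ by 1e-9 and watch the runs diverge.
r = 3.9                       # parameter in the chaotic regime
x_a, x_b = 0.400000000, 0.400000001

for step in range(1, 41):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {x_a:.6f}, run B = {x_b:.6f}, "
              f"|diff| = {abs(x_a - x_b):.2e}")
```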
It’s interesting to read Cliff Mass’ (UW meteorologist) blog on the subject. He goes into detail about the different models they use and how the forecasters (at least the good ones) decide when to trust and when to reject a given model’s simulation results.
I only discussed emag modeling because I haven’t gotten into the astrophysical modeling I’ll be doing for my PhD yet. However, I’m starting to realize how many misconceptions there are about modeling, and a lot of the arguments seem to come from people who haven’t done it and don’t understand how well we know what we don’t know.
I’ll have to check out Cliff Mass’ blog. Thanks!
I love writing modeling software. I’m not entirely sure why, but it’s just a fun domain to work in. Alas, I don’t get to do it much.
I’m more on the stochastic side of things, though: small, easily modeled, dumb agents receiving unpredictable inputs (the stochastic bit), and watching what kind of emergent behavior shows up in the resulting system. I got to spend three or four weeks about a year ago writing network simulations as part of a patent filing.
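Something in the spirit of this toy sketch (purely illustrative, not the patent work): each agent follows one trivial rule plus random noise, and the clustering that emerges isn’t visible in any single agent’s rule.

```python
# Dumb agents + random inputs -> emergent behavior: random walkers that
# drift slightly toward the crowd's average position end up clustering.
import random

N_AGENTS, STEPS, PULL = 50, 200, 0.05
positions = [random.uniform(-10.0, 10.0) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    center = sum(positions) / len(positions)              # crowd average
    positions = [
        p + PULL * (center - p) + random.gauss(0.0, 0.2)  # trivial rule + noise
        for p in positions
    ]

spread = max(positions) - min(positions)
print(f"initial spread ~20.0, final spread = {spread:.2f}")  # agents clustered
```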
So do/did you use a lot of Monte-Carlo type simulations?