
Fields of irony July 1, 2013

Posted by mareserinitatis in career, engineering, geology, geophysics, grad school, research, work.
add a comment

When I started thinking about what I wanted to do for grad school, I thought geophysics was a good option because I enjoy getting outside.  I figured that if I were doing something related to geology, that opportunity would present itself much more often than in electrical engineering.  I suppose this idea came about because I was used to spending most of my time in a 10′x20′ windowless room…or a much bigger windowless lab.  Either way, cabin fever sets in quickly when one is deprived of fresh air and sunshine most of the day.

Unfortunately, I discovered I wasn’t as crazy about ‘outdoor’ geology but fell in love with computationally intensive topics.  I love getting outside and collecting rocks, but I view it more now as a hobby than as a career path.

Recently, however, I’ve been working with some people in another department on a project.  This new project will probably require me to spend some time outside doing field work.  It’s rather ironic that I may end up getting my outside time because of a project I’m doing in electrical engineering.

I guess it all works out in the end.  Now if I could find a way to teach programming outdoors…

Repost: The varied and graphically-intensive world of nomograms March 3, 2013

Posted by mareserinitatis in electromagnetics, engineering, geology, geophysics, grad school.
add a comment

I spent a good chunk of time yesterday dealing with Smith charts, and I remembered in the recesses of my brain that I had once posted something about them in the old blog.  Sadly, it wasn’t as technically intensive as it could have been, but I still decided it was fun enough for a repost.  If you would like to read something with a bit more technical content, you can check out Fluxor’s post on Smith charts at EngineerBlogs.

A nomogram is an incredibly useful tool. It is a visual “solution” to an equation. Usually it is some sort of chart or plot that lets you locate “what you’ve got” and move from there to “what you need”.

Anyone who works on the analog side of electrical engineering often gets to play with Smith charts, which were of course invented by Baker*. They’re rather confusing looking things:

The usefulness of Smith charts is that they allow you to determine things like how much more transmission line you need to get an impedance match in your device. Rather than trying to solve an equation using complex values, you can just move along a curve on the Smith chart. (Disclaimer: While I learned how to use Smith charts in my microwave engineering course, I unfortunately would need to spend some time with my buddy Pozar to remember how to do it now.) I’m also aided in my negligence by the fact that there are a lot of nifty software programs that will compute the necessary values, reducing the need for a Smith chart. (Thank goodness for computers. If it weren’t for computers, I’d probably have to learn how to use a slide rule, too.)
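(If you’re curious about the math the chart is letting you skip, here’s a minimal Python sketch of that move-along-the-line step. The characteristic impedance, load, and line length below are made-up numbers purely for illustration, not from any real design.)

```python
import cmath
import math

# Made-up example values; nothing here comes from a real design.
Z0 = 50.0                   # characteristic impedance of the line (ohms)
ZL = 25.0 + 40.0j           # load impedance (ohms)
wavelength = 1.0            # work in units of one wavelength
length = 0.1 * wavelength   # how far we move along the line toward the generator

# Reflection coefficient at the load: the point you would plot on a Smith chart.
gamma_load = (ZL - Z0) / (ZL + Z0)

# Moving toward the generator multiplies gamma by exp(-2j*beta*length),
# which is the rotation you do graphically on the chart.
beta = 2.0 * math.pi / wavelength
gamma_in = gamma_load * cmath.exp(-2j * beta * length)

# Impedance seen looking into that length of line.
Z_in = Z0 * (1 + gamma_in) / (1 - gamma_in)

print(f"|Gamma| at the load: {abs(gamma_load):.3f}")
print(f"Input impedance: {Z_in.real:.1f} + {Z_in.imag:.1f}j ohms")
```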

What brought this up is that I was introduced to a nomogram used by scientists in the field of paleomagnetism. The nomograms in this case show the relationship between time and temperature in the demagnetization of magnetic minerals. For instance, if you have a mineral that has been exposed to a temperature of 400°C for 1000 seconds in the lab, you can follow the line on the nomogram and discover that the same amount of demagnetization could be caused by sitting at a temperature of 350°C for 100 million years.

So why do I spend time mentioning this on my LJ? Could it be because it’s good to know that there are graphical methods for approximating solutions to problems? That is good to know, but it’s not why I bring it up. The reason I felt the need to post about it is that I had an entirely different picture of nomograms when I was sitting in class:

tastee nom-o-grams

—–

*Just kidding. It was developed by Phillip H. Smith.

A totally subjective ranking of socially clueless people by career October 15, 2012

Posted by mareserinitatis in career, engineering, geology, geophysics, math, physics, science, societal commentary.
3 comments

I have no data to back this up.  However if someone has the time and inclination, I’d love for them to get some and validate my hypothesis.  I’m assuming the Autism Spectrum Quotient would be a good place to start.

There is a noticeable difference in general social cluelessness between people at large and science and engineering types.  I’ve been pondering, however, whether anyone has done a serious study of this phenomenon and provided a ranking system.  This might come in handy for non-sciency people, especially relatives.

I’m going to postulate a ranking, but please feel free to give me some feedback as to where you think this system falls down.  And again, data is gold.

So the following are ordered from most to least clueless:

  1. Physicists and mathematicians (and I’m sure they hate having to be in a group with another group)
  2. Electrical engineers and economists (I’m just throwing in the economists because while I’ve noticed they aren’t socially clueless, there are a number who may as well be, given the way they act)
  3. Mechanical engineers and computer scientists
  4. Chemists and geophysicists (the problem with geophysicists is that there’s a huge standard deviation, ranging from geologists to physicists…and a heavy dependence on how much alcohol they’ve consumed)
  5. Biologists and manufacturing engineers
  6. Civil engineers and soil scientists
  7. Geologists (because they always bring the alcohol)

So what do you think?

Students finding their direction June 23, 2012

Posted by mareserinitatis in education, engineering, geology, geophysics, physics, research, teaching.
add a comment

The younger son’s birthday was this week, and we opted to host a pool party at a local hotel.  (IMO, pool parties are the best for the elementary school age group: they keep themselves busy and then go home exhausted.)  I was checking in when I noticed a young man standing at the other end of the counter.  He looked familiar, so I asked if I knew him.

“I took your class last fall.”

“Oh great!  How did the rest of the school year go for you?”

“Great.  I actually switched to business and am really liking it.”

“Really?  Why did you switch?”

“I just figured I liked business a lot better.”

“That’s why they have you take those early major classes – so that you find out you don’t like it before you get too far into it.”

I think the poor kid thought I would be mad that he had switched.  But I wasn’t mad at all.  If he feels like he’d be better off in a different major, then he ought to go for it.  And that is part of what I’m trying to set out in the class – this is what engineers do.  If it doesn’t look fun, then you ought to think about a different major.  That’s a perfectly valid choice, and no one should judge a student for it.

(Yeah, I know…I sit here and wring my hands because older son gets these obnoxiously high scores in math and science but wants to be a writer…I’m one to talk.)

But seriously, I actually think it’s sort of silly to make students choose a major really early on in school.  I think it’s a good idea to try to take a lot of classes in different fields before you really choose.  I say this as someone who major-hopped a lot during undergrad.  I spent some time in physics, chemistry, journalism, and graphic arts.  I finally decided that I liked physics after all, but what got me excited was geophysics.  I happened to take a geology class when I was at Caltech because I had to take a lab course, and everyone told me geology was the easiest.  Turns out, I really liked it and did very well in the course.  (Of course, later on, I found that geology felt too qualitative for me and that I preferred geophysics, so it all worked out.  On the other hand, I think I would’ve liked geology better if it had all been field courses.)  :-)

I have run into people who got upset with me for this type of thing.  I was doing research with a professor in undergrad, but I felt like the research wasn’t going well and got sort of excited about a math project that I’d seen a professor give a talk about.  I talked to that professor to see if he’d be interested in having me as a student, which he was.   When I told the other professor that I was going to work with the math professor, all hell broke loose.  (I still think I made the right choice, though, especially since the first project really never did go anywhere.)  I have yet to figure out why the first professor got upset, though, and did some petty stuff, like kicking me out of the student office (despite no one needing a spot) and having the secretary take away my mailbox.  (This was silly, BTW, as I was president of the Society of Physics Students, so she ended up giving it back to me a month later so I could get SPS mail.)

And what did this do?  It certainly reinforced that I didn’t want to work with this person, but I could also see it making a student feel like this person is representative of a particular field.  Can’t you imagine a student deciding not to go into a major because of the way the professors treat him or her?  I can (and did!), and it just shows how ridiculous the whole thing was.

No, students need some time to explore their interests, and getting mad at them for not doing what you think they should do is silly.  They are the ones who have to deal with the consequences of their choices, and if a student takes my class and decides they don’t want to spend the next five to ten years of their life studying engineering, then I think they’ve learned something very important and just as valid as anything else I have to teach them.

Are grad classes a waste of time? February 5, 2012

Posted by mareserinitatis in education, engineering, geophysics, grad school, physics, research, solar physics, teaching.
1 comment so far

I have seen both Gears and Massimo post comments about how grad classes are a waste of time.  Last week, Gears said this in his EngineerBlogs post (there are several points in it I’d like to address, but this will have to suffice for tonight), and Massimo has suggested ‘workshop’ classes.  I have to say that I disagree with both of them, but I think it’s because of my weird background.

For review, I did my undergrad in physics with a math minor and my master’s in electrical engineering, and my PhD will officially be in geophysics (as was all my coursework), though my project is actually on solar physics.

Honestly, I’m not sure I could have done that without the coursework.  On the other hand, I think my attitude would be different if I’d stayed in one field.  In my work in electrical engineering, I use almost every class I took, especially the grad courses.  I use antennas and microwave engineering a lot…so much so that my circuits knowledge is probably the rustiest.  (I know, that’s completely backwards for an EE, but that’s how it goes sometimes.)  I often find myself wishing I’d had the opportunity to take some advanced signal processing as well.  And one of the most useful courses was numerical techniques in electromagnetics.  Not only does it help me with the work I’m doing in EE, it’s also helping with many of the things I’ve run into looking at geo- and solar physics research.

The flip side to this is that if I’d continued on to get a PhD in EE, any further coursework would not have been terribly relevant.  I think there’s an optimum point, and it may have come earlier if my undergrad had been in EE.

My classes in geophysics were not as useful, and I think there were probably 2.5 classes that had anything at all to do with my research and what I’m doing now.  Realistically, for the stuff I was interested in, I probably should have looked at a PhD in physics or astrophysics…but that may not have been much better if I was taking a bunch of classes on stuff that had no bearing on my research, either (which is likely).  However, the 2.5 classes that were useful have been REALLY useful.

I’ve got a breadth in classes that most students never get.  This is one thing that I think is a bit of a sticking point for some students.  Most places have a ‘breadth requirement’ – i.e. so many classes outside of their department.  I think this is a good thing as it helps people to see what other types of things could be relevant to their research.  I really think this is something that should be required because of all the ideas that come from seeing how different disciplines approach their fundamental problems, and even having some exposure to what those problems are is a benefit to students.

The real problem, in my opinion, is that so many places require a LOT of credits.  It’s fairly common for good EE programs to require somewhere between 50 and 60 credits of JUST coursework.  I don’t like the idea of no classes, but I really think you could trim them back and just make students take classes that are relevant to their research as well as a couple of classes for breadth.  I was very disappointed with my PhD program because once you hit advanced candidacy status, you’re not allowed to take any more classes unless your advisor is willing to foot the bill.  That’s not likely, because most advisors want their students working on their research and getting done (not that I blame them).  The downside is that there are a couple of classes I could have really used but was unable to take because they didn’t fulfill the requirements for my degree.  Most of my classes had to be in the department since I’d already fulfilled my breadth requirement, so taking a class here or there outside the department was viewed as a waste of time because it didn’t let me tick off any of the boxes in the red tape.  And of course, it only becomes obvious that you would really benefit from a course once you’ve hit advanced status and can’t take any more.

It would be nice if there was a system where your advisor could sit down with you and figure out where you’re interested in going research-wise and plot a course through the classwork that makes sense and is flexible.  Wouldn’t it be nice if you discovered you need to learn about a particular topic and could then go take the course on it? It makes more sense to me than filling in boxes to get to a certain number of credits or hedging bets that something will be useful later on.

Let’s face it: research degrees are already very specialized and take a long time, so it would make more sense to cut the classes down to those that are relevant.  This would ideally save time without sacrificing the background required for a research project. Finally, a really good option, which more universities ought to allow, is independent study classes.  During my MS, I took one class as an independent study working on emag stuff.  It was awesome as I got the material I really needed in a more structured way and was able to do a project which (I’m still hoping) would be a foundation for some decent research down the line.  Therefore, I don’t feel grad classes are a waste of time, as long as they make sense, and I wish universities would be more flexible in some of their requirements.

The force is weak with this one… June 16, 2011

Posted by mareserinitatis in electromagnetics, geology, geophysics, physics, science.
Tags: electromagnetic energy
2 comments

I had an interesting question from someone today: why do we use electromagnetics to study so many things?  Why can’t we use gravity or something similar?  Specifically, they were wondering about non-invasive methods for studying the human body.

It’s easiest to start with Newton’s Law of Gravitation, which tells us how much gravitational force one object (M) exerts on another (m):

F_grav = G·M·m / r²

and Coulomb’s Law (which explains the force of attraction between electrical charges, Q and q):

F_elec = Q·q / (4π·ε₀·r²)

If we want to find the ratio of gravitational force to electrical force, we end up with something like this:

F_grav / F_elec = 4π·ε₀·G·M·m / (Q·q)

Now, let’s imagine we’re just looking at the gravitational and electrical forces between two electrons that are 1 m apart.  We use G=6.673•10⁻¹¹ m³ kg⁻¹ s⁻² (the gravitational constant), M=m=9.11•10⁻³¹ kg (the masses of the two electrons), ε₀=8.854•10⁻¹² C² kg⁻¹ m⁻³ s² (the permittivity of free space, although it may be just as easy to think of it as an electrical constant), and Q=q=1.602•10⁻¹⁹ C (this being the charge of an electron).  Using these values, all our units will disappear, which is good because we’re looking at a ratio of two forces and shouldn’t have any units, and we end up with a value of about 2•10⁻⁴³.

What this means is that the gravitational force is 43 orders of magnitude smaller than the electrical force…or you could put a decimal point with 42 zeros and then a 1 behind it, and that’s how much smaller the force is.  When it’s already difficult to measure current values that contain many, many electrons (as compared to the two electrons we examined), it’s going to be impossible to find something that exerts a force that is 43 orders of magnitude smaller than what we can already pick up.
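(If you want to check that number yourself, here’s a quick Python sketch of the ratio using the same constants as above.  Since both forces fall off as 1/r², the separation between the electrons cancels out of the ratio.)

```python
import math

# Same constants as quoted above, in SI units
G = 6.673e-11       # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.854e-12    # permittivity of free space, C^2 kg^-1 m^-3 s^2
m_e = 9.11e-31      # electron mass, kg
q_e = 1.602e-19     # electron charge magnitude, C

# Ratio of gravitational to electrical force between two electrons.
# Both forces go as 1/r^2, so the distance drops out of the ratio.
ratio = (G * m_e * m_e) * (4 * math.pi * eps0) / (q_e * q_e)

print(f"F_grav / F_elec = {ratio:.1e}")  # prints roughly 2.4e-43
```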

You can pick up small changes in gravitational forces when talking about large geophysical features – like ore deposits and mountain ranges.  In fact, this principle is used a lot in exploration geophysics, where gravimeters are used to look for mineral resources.  Our bodies are far less sensitive than that, though, and only notice gravity when we are talking about objects the size of planets or moons.  However, we are sensitive to changes in acceleration, so you can feel something like a change in gravitational pull when riding on an elevator, but that is because the change is both fast and of a reasonable size.

Anyway, the huge difference is why we are permeated (ba dum ching!) by devices that detect and use changes in electromagnetic radiation but not in gravitational energy.

It’s freezing; no wait, it’s melting… May 23, 2011

Posted by mareserinitatis in engineerblogs.org, geophysics, papers, research, science.
Tags: geodynamo, inner core, outer core
add a comment

First order of business is to send you to EngineerBlogs.org where I posted today on how engineers who do simulation are not, in fact, inept experimentalists.  Just come back after you’ve read it (and commented!).

Are you done yet?

The other thing I wanted to mention was that I came across an article on LabSpaces about how Earth’s core may be continually freezing and melting.  I am interested because of the implications for the geodynamo.  (As a side note, I haven’t read the paper directly; I’m just commenting on the LabSpaces post.)

Earth’s outer core is composed primarily of molten iron, but there are some lighter elements in there.  The generally accepted theory is that most of the energy to power the geodynamo (which generates Earth’s magnetic field) comes from the freezing of the outer core.  It’s still really hot down there, but the pressure is so high that the iron can become solid.  As the iron freezes out, it releases energy.  Another source of energy is the rising of the lighter elements as they don’t freeze out.

There are some problems with this theory.  The first is that the iron isn’t freezing out at a rate that produces enough energy to power the geodynamo.  That is, it provides some of the energy, but not all of it.  If this freezing-out process were producing all of the energy needed to power the geodynamo, the core would have solidified entirely in about a billion years.  The planet has been here for about 4 billion, so obviously that’s not what’s going on.  Second, the amount of energy generated by the inner core is proportional to its surface area.  This means that you would expect Earth’s magnetic field to increase over time as the inner core grew.  Experimental evidence suggests that Earth’s magnetic field strength was about the same even 3 billion years ago.

The theory that the inner core is continually freezing and melting again might change some of the perspective on this.  If the core freezes and generates energy and then melts again, that could potentially explain why the core hasn’t frozen out, and it may mean the core has been growing for longer than anticipated.  On the other hand, if the remelting process consumes a significant amount of energy, it doesn’t help with the energy balance at all and may actually exacerbate the problem, because even more energy would have to come from some other mechanism.

Repost: Simulations December 29, 2010

Posted by mareserinitatis in computers, electromagnetics, engineering, geophysics, research, science.
5 comments

After reading this post and participating in the discussion, I felt that perhaps reposting this from the old blog was in order.

After posting this morning about how I hate computers, I figured I should temper that.

One thing I hear an awful lot of is how people don’t trust simulations. (They also don’t trust math, but let’s take one thing at a time.)

An awful lot of science can be done through simulations. However, as soon as you tell someone that you got your science out of a computer program that feeds you data or makes pretty pictures, you may as well have said you did your science with a kid’s chemistry set and drew your data in crayon.

Skepticism about computer methods is a good thing as long as you know where to draw the line. A couple years ago, I went to a tutorial session on different computation methods used in electromagnetic compatibility (EMC). At the end of the tutorial, a spontaneous discussion about the reliability, drawbacks, and validation of simulations came up. I’ll summarize some of the main points and talk about how I have addressed them.

I guess the first thing to address is that there are many different methods to simulate things, and these methods have drawbacks. As an example from electromagnetics (EM), folks often use something called the Finite Element Method (FEM). FEM is not unique to EM…it was actually first developed to examine mechanical engineering problems (think stress and strain). It works very well for electromagnetics as well, with one caveat: whatever you’re modeling needs to be enclosed. If you don’t have an enclosed region (say, a shielded box over a circuit), you run into trouble because FEM can’t mesh space infinitely. There are methods that have been developed to deal with devices that are radiating into open space. One is called a Perfectly Matched Layer (PML), which matches the impedance your radiator sees at the edge of the simulation space and then attenuates the field beyond that boundary.

I give this example because, as someone who has worked with antennas using FEM-based software, it’s important to understand these things. I didn’t, at first, and it took a lot of work to figure out if the software was even simulating correctly.

How did I do it? I used a method that every good simulation researcher uses: I validated my simulations. In antennas, I started out by modeling simple, known devices to see if the results matched the theoretical values. Since the simulation is based on the same underlying equations as the theoretical values, the results should be pretty close. Next, as my devices increased in complexity, I used another computational EM code called the Method of Moments (MoM). MoM is awesome because it works differently than FEM: FEM jumps straight into calculating fields, while MoM calculates the currents on an antenna (for example) and then computes the field at any given point from those currents. Once I was able to get simulations that matched either an analytical result or the other code, I could be fairly certain that I’d gotten the kinks out.
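(To make the “compare against something you can solve by hand” step concrete, here’s a toy sketch in Python. It’s obviously not an FEM antenna solver; it’s just a 1D finite-difference solution for the potential between two parallel plates, checked against the exact linear answer. The grid size and voltages are arbitrary, but the workflow of simulating a known problem and measuring the error is the same one I’m describing.)

```python
import numpy as np

# Toy validation problem: electrostatic potential between two plates held
# at 0 V and 100 V.  The exact solution of Laplace's equation in 1D is a
# straight line, V(x) = V0 * x / d, so the numerical result should match it.

N = 101                      # number of grid points
d = 0.01                     # plate separation (m)
V0 = 100.0                   # voltage on the right-hand plate
x = np.linspace(0.0, d, N)

# Finite-difference form of Laplace's equation: V[i-1] - 2*V[i] + V[i+1] = 0
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0; b[0] = 0.0        # boundary condition at x = 0
A[-1, -1] = 1.0; b[-1] = V0      # boundary condition at x = d
for i in range(1, N - 1):
    A[i, i - 1] = 1.0
    A[i, i] = -2.0
    A[i, i + 1] = 1.0

V_numeric = np.linalg.solve(A, b)
V_exact = V0 * x / d

max_error = np.max(np.abs(V_numeric - V_exact))
print(f"Maximum error vs. analytical solution: {max_error:.2e} V")
```

The real thing involves antennas rather than parallel plates, of course, but the principle is identical.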

Researchers in other areas (say, global climate change) validate as well. I would assume their code has to reproduce any analytical results that exist, and they can validate more complex code by seeing if it generates something fairly similar to actual events and known history.

The final step for validation, in my experience, is to take the code and run it using an example of something more complicated. Usually, this is the point where you start looking for interesting journal articles to reproduce.

Now, in all fairness, I know that people don’t always follow these procedures, which is where I believe people should start to be skeptical of results. In fact, the last step of validation can be the hardest even though it’s probably the most important. In my short lifetime in computational electromagnetics, I’ve had the misfortune of coming across papers that predicted results totally different from mine. In a couple of cases, I ended up writing the authors, only to find out that they had misprinted some dimensions. On the other hand, you don’t want to pursue that route until you’ve exhausted all your other options. In my case, moving part of a device by just a few millimeters (at high frequencies, a significant chunk of a wavelength) changed the resonance frequency of the entire device. That’s why it’s preferable to learn how to use the built-in placement functions rather than entering things by hand.

However, those papers aren’t all that common (I hope…at least I can say I haven’t hit too many). More often than not, good researchers have tested their code to make sure it is accurate and representative of what they are trying to model. They have also reproduced previously known results to show that their method is sound.

The next time someone tries to tell you it’s just a model, you can reply by asking them how much they know about code validation. If you read this entire post, there’s a good chance you’ll know more about it than they do.

The geophysics (and 1 solar physics) linkety-link September 7, 2010

Posted by mareserinitatis in geophysics, science, solar physics.
2 comments

New View of Tectonic Plates: Computer Modeling of Earth’s Mantle Flow, Plate Motions, and Fault Zones: This article on Science Daily gives an overview of a new model that examines the interplay between mantle flow, tectonic motion, and fault zone behavior. (The original article is here, but it’s behind a pay wall.) The authors have taken an adaptive algorithm, which can create a finer mesh in areas where more detail is needed, and modified it so that it can be used on distributed computing systems. Many models utilize regularly spaced meshes. It would be really cool to develop a model that incorporates the behavior of all parts and scales of the Earth system, and this model may be a step in that direction.

ScienceNews had an article on what may have been an uber-fast magnetic field reversal. I’ll be interested to see what other people say on this one. One friend noted that the thermal history of the area is complicated and thus may not be a good candidate for this type of study, but I’m not sure how you could find this with something less complicated. Anyway, it would have some interesting implications if the field actually can flip this fast…or at least have an excursion.

Discovery News has an article on a proposal that the Yellowstone hotspot may have shredded the Juan de Fuca plate, thus slowing down the rate of subduction of the Pacific under North America.

And finally, Dave Jones from EEVblog sent this one out over Twitter: something from the sun, possibly neutrinos, might change the rate of decay for radioactive elements on Earth. That’s just cool.
