Wordy title, I know. But this is a bit of a nerdy blog post, meant for those of you on Chowhound to dissect, or for those of you who know a lot about metallurgy (or at least enjoy the science of cookware) to sink your teeth into. Or, for those of you who crave a bit more information about those Griswolds and Wagners you covet at the local estate sale.
It is said that iron is the reason for a “dark age” in ancient times. This is not the traditional Dark Ages that we immediately consider – that of dark days of superstition, loss of learning, and even the plague. This is an age from which most relics of antiquity are lost, ambiguous at best, or completely unable to give us a proper piecing together of history. This dark age is sometimes called the Catastrophe, a period of roughly 1200 – 1150 BCE (Before Common Era, the secular successor to “B.C.”), and no single cause for it has ever been pinned down. Hypotheses abound. Was it the ‘Sea Peoples,’ destructively moving across the lands? Earthquakes that smote down the existing civilizations along the Levant? Drought? Changes in warfare? An internal, systemic collapse of political and socio-economic order? Or, perhaps, the discovery of iron?
Theories pile upon theories, and none can be definitively proven right or wrong. Still, I want to say that it was likely not iron that ended the late Bronze Age and ushered in the Iron Age of mankind. At the time, ironworking was simply not well understood, nor widespread across the cultures of the day. It was a heavily guarded secret. The King of Hatti, lord of the Hittites – a nation feared on much the same level as the Assyrians – closely guarded his ironsmiths’ secrets, as did those who ran the ironworks on nearby Cyprus. Because information trickled out so slowly, the process by which iron could be pounded and bent to one’s will did not spread widely until after the Catastrophe ended, when blacksmiths eventually fanned out across the known world to share their knowledge and their wares. Perhaps, as people realized the value of these pieces, the popularity spread. Perhaps the prestige of owning metal cookware was also a part of it. Or people just really liked their eggs cooked in some solid cast iron.
For hundreds of years, the only iron in use was wrought iron. That is to say, iron separated by heat from iron ore – which was found in many forms, from bog iron to sediment deposits, and was easily accessible once the locals knew where to look – with the slag drawn off using a flux of crushed seashells or limestone, and then pounded, heated, and pounded some more. The key point, in essence, is that virtually all iron made prior to the 15th century was wrought iron.
Iron ore is a highly oxidized material. Iron holds onto its oxygen atoms more fiercely than most metals, so a great deal of carbon monoxide must pass through the ore to strip the oxygen away (in essence, Fe₂O₃ + 3CO → 2Fe + 3CO₂). Using a bellows-fed furnace, ironworkers could heat the ore to a temperature high enough to separate most of the metal from the slag, creating a black, porous “bloom” of what would today be considered very poor, impure iron. The slag, oxygen pockets, and other impurities trapped in the bloom made it, at first, extremely brittle and nearly unusable.
By re-heating the bloom, pulling it from the fire, and banging on it with a hammer, a smith could slowly force out the impurities, and with them the brittleness. The heat made the iron malleable, and the hammering closed the pores and welded the metal back onto itself. This same process is how iron pots and kettles were made before blast furnaces were invented. A blacksmith would heat and pound, reheat and pound, until the pieces were strong (squeezing out all those slag and oxygen pockets meant the iron was no longer so brittle and easy to break) yet still malleable, thanks to the very low carbon content of the worked bloom. The iron could be formed into thin pieces and bent over an anvil much the same way a tinsmith or coppersmith would bend thin pieces of their particular metal – though the iron took far longer and was much harder to move than something as soft as copper. Still, once a sufficient bend and shape had been worked into each piece, the pieces were heated together until bright orange and then pounded and welded into one. This is how many wrought iron pots and vessels were created prior to the advent of the blast furnace.
Wrought iron never truly went out of fashion, and as long as civilization required swords and weapons, a smith and his forge were always in demand. Over time, smiths learned to create something far stronger than wrought iron: by incessantly heating the metal in a charcoal fire, hammering it, and cooling it quickly, they worked enough carbon into it that the result would today be considered steel. In ancient times, the carbon content and the way the iron was cooled played a gigantic part in determining whether the pieces – swords or more everyday items – were usable at all. Cast iron, the form used for casting and later machining, has 2% or more carbon in its make-up and is still relatively brittle, especially compared with carbon steel. Carbon steel (roughly 0.3% to 2% carbon) contains both soft, malleable ferrite and hard particles of iron carbide called cementite. When heated to glowing, the carbides dissolve, and the structure becomes austenite. From there, the cooling path decides everything: a fast quench produces martensite, a very hard form of iron whose structure looks like needles under a microscope. That jagged structure also makes the metal brittle. A smith who controlled the cooling could temper the metal, letting some of the rounded carbides re-form within the martensite, and the result was a steel far tougher than any other iron available.
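Purely as an illustration of the paragraph above – none of this code comes from the post, and the thresholds are rough textbook values, not real metallurgy – a toy lookup can map a smith’s two levers, carbon content and cooling speed, to the structures just described:

```python
# Illustrative sketch only: rough carbon-content thresholds and a binary
# "quenched or not" switch standing in for the real continuum of cooling rates.

def microstructure(carbon_pct: float, quenched: bool) -> str:
    """Return a rough name for the structure a smith would end up with."""
    if carbon_pct < 0.3:
        # Too little carbon to harden meaningfully: soft, ferritic (wrought) iron.
        return "ferrite (wrought iron)"
    if carbon_pct > 2.0:
        # Cast-iron territory: excess carbon, brittle as-cast.
        return "cast iron"
    # Steel range: heated to glowing it becomes austenite, and the cooling
    # path then decides what the austenite transforms into.
    if quenched:
        return "martensite (hard, brittle needles)"
    return "pearlite (softer layers of ferrite and cementite)"

print(microstructure(0.8, quenched=True))   # fast quench of steel -> martensite
print(microstructure(0.8, quenched=False))  # slow cool of steel -> pearlite
print(microstructure(0.05, quenched=True))  # too little carbon to harden
```

A real smith, of course, had no percentages – only color, feel, and hard-won habit.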
Typically, the smiths did not understand the science behind their process. Rare was the blacksmith who grasped that the cooling was just as essential – if not more so – as the heat of the fire or the strength of the arm working the metal in producing a far superior grade of iron.
Today, we use alloys to control the grade of iron for various applications. I’ve used the term ‘alloy’ before in other posts to mean that something is not pure. And while cast iron comes in many grades and can be heated and hardened differently, the elements added to the metal are still simple, natural ones that make the iron behave a certain way once cooled and, possibly, machined. Cast iron generally has over 2% carbon (C) as the main alloying element in its metallurgic make-up, and usually 1–3% silicon (Si). This combination of iron, carbon, and silicon provides a metal excellent for casting and later machining (usually after a secondary heating process). By its nature, the excess carbon in cast iron comes out of solution as graphite, and the shape that graphite takes within the structure of the iron is the other prime way to tell whether the iron is grey or ductile. In grey iron, the graphite forms thin, shard-like flakes, which is what creates grey iron’s brittleness at the structural level. Ductile iron, while made of much the same ingredients as grey iron, has its graphite in rounded nodules, which gives ductile iron its strength and its ability to withstand shock.
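As a mnemonic for the ranges quoted above – again, a toy sketch of my own, not foundry software, since real grades depend on far more than two numbers – the family tree of irons can be bucketed like this:

```python
# Toy classifier built from the rough carbon range and graphite shapes
# described above. Treat the thresholds as ballpark, not specification.

def classify_iron(carbon_pct: float, graphite_shape: str = "flake") -> str:
    """Roughly bucket an iron alloy by carbon content and graphite shape."""
    if carbon_pct < 0.3:
        return "wrought iron"   # almost no carbon: soft and malleable
    if carbon_pct <= 2.0:
        return "carbon steel"   # ferrite plus hard cementite particles
    # Above ~2% carbon (usually with 1-3% silicon) we are in cast-iron
    # territory; the graphite's shape separates grey iron from ductile iron.
    return "ductile iron" if graphite_shape == "nodule" else "grey iron"

print(classify_iron(3.2))                           # flakes -> grey iron
print(classify_iron(3.2, graphite_shape="nodule"))  # nodules -> ductile iron
print(classify_iron(0.8))                           # mid-range carbon -> steel
```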
Is all that enough science for the day? Heck, it’s enough for me.