The SR-71 Blackbird: A Story of Remarkable Innovation

Artist's illustration of the SR-71 aircraft
Computer-generated 3D illustration of the Strategic Reconnaissance Aircraft SR-71 Blackbird. Photo: iStock

The Lockheed SR-71 Blackbird is one of history’s most iconic spy planes. Nicknamed “Habu” by its crews for its sinister, snake-like appearance, this aircraft still stands as an impressive feat of aeronautical engineering. In fact, it holds speed and altitude records that have yet to be broken, and there is much more to this plane than meets the eye…

The Origins of the SR-71 

U2 Spy Plane in the air
U2 Spy Plane. Photo: Wikipedia/USAF Public Domain

Before this legendary aircraft was developed, the United States relied on the famous U2 spy plane for its Cold War reconnaissance. On May 1, 1960, a U2 was spotted deep inside Soviet territory, but the US was not initially concerned, believing that the aircraft's extreme cruising altitude kept it out of reach of Soviet air defenses. They were wrong. 

A Soviet V-750 surface-to-air missile shot down the spy plane. The pilot, Francis Gary Powers, who took off from a secret US airbase in Pakistan, parachuted to the ground safely but was immediately captured by Soviet authorities and taken prisoner. 

He was later released in a prisoner exchange between the United States and the Soviet Union; however, it was quite clear that something else had to be done if the US wanted (and needed) to continue its reconnaissance over the Soviet Union and other foreign territory without the risk of its aircraft being shot down.

Corona Spy Satellite

Corona spy satellite illustration
Illustration of the Corona Spy Satellite. Photo: Wikipedia/National Reconnaissance Office, Public Domain

The United States began several ambitious projects to succeed the U2. The Corona spy satellite program was one of the first, and it proved remarkably successful: in August 1960 it photographed large swaths of Soviet territory.

Even more remarkable, the film the satellite exposed was returned to Earth in recovery capsules and successfully retrieved, yielding an abundance of much-needed intelligence as the Cold War intensified. The Corona program ended in 1972.

A-12 Spy Plane

A-12 Prototype Spy Plane in the Air
A-12 Prototype. Photo: U.S. Air Force – Defense Visual Information Center (DVIC)

The CIA contracted Lockheed to develop a new plane that would surpass the U2 in every way. The Lockheed A-12 was born.

This prototype spawned two notable variants: the YF-12A interceptor, designed to replace the F-106 Delta Dart interceptor/fighter, and the SR-71 Blackbird, designed not as a fighter jet but as a high-speed reconnaissance aircraft.

The YF-12A was built and tested, but the Air Force ultimately chose the F-111 fighter/bomber instead. The SR-71, however, was commissioned, and 32 Blackbirds were eventually built.

The SR-71 inherited the advanced concepts of its A-12 parent and added the cameras and supporting equipment needed for its intelligence missions over foreign territory (namely the Soviet Union). It flew much higher than the U2 and roughly four times faster. To this day, no operational jet aircraft has surpassed the speed of the SR-71 Blackbird.

Enter Skunk Works

Assembly line of the SR-71 Blackbird at Skunk Works
Assembly line of the SR-71 Blackbird at Skunk Works. Photo: Wikipedia Public Domain

This top secret R&D group within Lockheed Corporation began during WWII to develop advanced fighter aircraft, but its most famous work came after the U2 was shot down.

As mentioned, it was evident that a more sophisticated aircraft was required, one that could avoid Soviet interceptors and missiles while presenting a smaller radar signature. To put it another way, this new aircraft had to fly faster, higher, and more stealthily than anything else in existence at the time. The Skunk Works design team was tasked with creating it. 

Development of the SR-71

Pratt & Whitney Engine for the SR-71
The Pratt & Whitney J58 engine powered the SR-71 Blackbird. Photo: iStock

The design that Skunk Works came up with was a radical break from conventional aircraft design. The plane would have a long, curved nose housing a long-range camera and, behind it, a shorter curved section housing the pilot.

The idea behind this design was that it would significantly reduce the plane’s radar cross-section, making the aircraft appear far smaller to enemy radar receivers than it actually was.

The airframe was also shaped to smooth the flow of air over it, reducing drag and increasing speed. And all of this had to be achieved in a plane that carried tens of thousands of pounds of fuel along with its reconnaissance payload. 

Its futuristic profile made it difficult to detect on radar. Even the black paint, loaded with radar-absorbing iron particles, helped hide it from Soviet radar defenses. Because of the plane’s unique design, some engineers viewed it as more of a spaceship than an aircraft. 

The metal titanium was one of the main reasons for the SR-71’s success. Titanium is almost as strong as steel, yet light enough to allow the plane to fly and maneuver well. It can also withstand the enormous temperatures generated when flying at 2,200 mph (3,540 km/h). 

And all this was done before digital functionality became commonplace.

Titanium and the Soviet Union

Photo of titanium
Titanium in Alloy Form. Photo: iStock

Although titanium is the ninth most abundant element in the earth’s crust, workable reserves in the United States were limited. Ironically, one of the places where titanium ore is abundant is the Soviet Union, so the United States created dummy companies to hide who was actually purchasing the needed metal.

The result was that the US succeeded in importing titanium from right under the noses of the Soviets, and used it to build an aircraft that would eventually fly over their territory and spy on them. How ironic!

Specifications of the SR-71 Blackbird

Inside SR-71
Cockpit of the SR-71. The display is all analog. Photo: iStock

This aircraft was truly an extraordinary feat of engineering, and it had many specifications that would go on to set records and even become standards for future planes.

It carried a crew of two (a pilot and a reconnaissance systems officer) and could cruise at Mach 3.2 (roughly 2,200 mph) at 85,000 feet, well up into the Earth’s stratosphere. With a fuel capacity of roughly 80,000 pounds of JP-7, it could fly for over 2,500 miles without having to refuel. 

Because it was designed to fly at such extreme altitudes, the SR-71’s cockpit was pressurized, and the crew wore full pressure suits in case that pressurization was ever lost. At those altitudes, the plane also flew above weather that other aircraft had to fly through or around. 

How Fast is the SR-71?

Computer generated 3D illustration with the American Reconnaissance Aircraft SR-71
Computer-generated illustration of the SR-71 reconnaissance aircraft. Photo: iStock

As mentioned, this aircraft could cruise at Mach 3.2, faster than a rifle bullet leaves the muzzle. Because the plane was so highly streamlined, it could sustain those speeds without creating dangerously high loads on the airframe, and it could hold its altitude without burning excessive fuel just to stay aloft.
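
A Mach number is a multiple of the local speed of sound, which depends on air temperature and therefore on altitude, so quoted mph figures for the SR-71 vary. The short sketch below (Python, assuming the standard-atmosphere temperature of about 217 K near 85,000 feet) shows roughly how Mach 3.2 translates into miles per hour.

    import math

    GAMMA = 1.4      # ratio of specific heats for air
    R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

    def speed_of_sound(temp_kelvin):
        """Speed of sound in dry air at the given temperature, in m/s."""
        return math.sqrt(GAMMA * R_AIR * temp_kelvin)

    T_ALTITUDE = 216.65                 # assumed stratospheric temperature, kelvin
    a = speed_of_sound(T_ALTITUDE)      # about 295 m/s
    true_airspeed_mph = 3.2 * a * 2.23694

    print(f"Mach 3.2 at altitude is roughly {true_airspeed_mph:.0f} mph")  # ~2,100 mph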

This was a massive advantage for the SR-71, letting it fly for hours between refuelings. Its records are remarkable: in 1974 an SR-71 crossed from New York to London in under two hours, and on its final flight in 1990 another flew from Los Angeles to Washington, D.C. in just 64 minutes, averaging more than 2,100 mph, with one leg flown at nearly 2,190 mph. The official speed record for an air-breathing crewed jet, 2,193 mph, was set by an SR-71 in 1976 and still stands; no aircraft in service today comes close.

Other Innovations by the Blackbird

As if breaking speed and altitude records weren’t impressive enough, the SR-71 also pioneered many other technologies that are still in use today. Here are some examples.

    • The SR-71 used its own special JP-7 fuel as a heat sink to cool the aircraft, an approach later adopted in other high-speed engine designs.
    • Its black paint was formulated to absorb radar energy and radiate heat efficiently, contributing to both stealth and thermal management.
    • The cockpit instrumentation was advanced for its day, giving the crew critical flight and sensor information at a glance.
    • The navigation system was revolutionary: an astro-inertial system that tracked stars even in daylight to fix the plane’s position accurately without outside radio aids.
    • The plane communicated with ground stations by transmitting information in short bursts of radio energy, minimizing the chance of interception.
    • The engines used a unique bypass arrangement that routed part of the incoming airflow around the core, keeping them from overheating at the plane’s extreme speeds.

Conclusion

The SR-71 Blackbird was one of the most advanced aircraft ever created. It pushed the boundaries of aeronautical engineering, and even in the modern digital age, it is still a very impressive machine.

This supersonic aircraft was remarkably efficient for its performance, able to travel long distances at sustained high speed while carrying a heavy sensor payload. It was also extremely stealthy for its era, making it a difficult target to see and track.

Despite having been retired in the 1990s, the SR-71 still holds many impressive speed and altitude records. It truly is one of the most impressive aircraft ever created and deserves its place as a legend in aviation history.

 

Titanium – What is It and What is It Used For?

Photo of titanium
Titanium metal alloy, widely used in industry for its strength and resistance. Photo: iStock

Overview

In the last few years, there has been a lot of buzz about the metal known as titanium. The reason is that it has quite a few properties which make it useful in everyday life.

It is strong, lightweight, and corrosion-resistant among other things. It is most popular for being used to create aircraft parts and car engine components; however, there is so much more to this metal than meets the eye. 

Titanium was only identified as an element in the late 18th century, and we have only recently begun to understand exactly how useful this metal can be. It proved especially valuable for military applications, starting with the famous SR-71 reconnaissance aircraft: the metal’s strength, high-temperature resilience (as we will discuss below), and light weight made it perfect for that spy plane. 

Let’s take a look at some interesting facts about titanium.

Properties: Is It Stronger than Steel?

You might have heard that titanium is as strong as steel. While this is not entirely true, it is close enough to matter. To begin with, strength is not a single property of a material, but for simplicity, let’s treat it as one. 

The tensile strength of a material is a measure of how much pulling stress it can withstand before it permanently stretches and ultimately breaks apart. It is expressed in pascals (one pascal equals one newton of force per square meter), and for metals usually in megapascals (MPa). Common structural steels have tensile strengths of roughly 400 to 600 MPa, while high-strength alloy steels can exceed 1,500 MPa.

Titanium alloys typically fall in the range of about 900 to 1,200 MPa, so the strongest steels are stronger than titanium in absolute terms. The key point, however, is weight: titanium is roughly 45 percent lighter than steel, so for a comparable strength a titanium part can be far lighter, which is exactly what aircraft designers care about.
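
What engineers actually compare is strength per unit of weight. Below is a minimal sketch using representative, assumed handbook values for one high-strength steel and one common titanium alloy; real figures vary widely by alloy and heat treatment, but the ratio shows why titanium wins in aircraft.

    # Assumed, representative values; actual alloys vary considerably.
    materials = {
        "high-strength alloy steel": {"tensile_mpa": 1200, "density_kg_m3": 7850},
        "Ti-6Al-4V titanium alloy":  {"tensile_mpa": 950,  "density_kg_m3": 4430},
    }

    for name, props in materials.items():
        # Specific strength = tensile strength / density, in kN*m/kg
        specific = props["tensile_mpa"] * 1e6 / props["density_kg_m3"] / 1000
        print(f"{name}: {specific:.0f} kN*m/kg")
    # The steel is stronger in absolute terms here, yet the titanium alloy
    # carries roughly 40% more load per kilogram of material.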

Titanium Symbol
Titanium (Ti) has 22 electrons and 22 protons. Photo: iStock

Chemical Properties of Titanium

Titanium has a lot of unique properties that make it special. It has a very high melting point (more than 3,000 degrees Fahrenheit). Because titanium resists oxidation at high temperatures, it is often used in high-temperature applications.

Oxidation is the loss of electrons from the titanium atoms, which lets them combine readily with other atoms (most often oxygen), changing the material’s properties and weakening it.

A perfect example of using titanium for its resistance to oxidation at high temperatures is the SR-71, which cruised above Mach 3, around 2,200 miles per hour. The metal is also corrosion-resistant, which makes it very useful when exposed to water or air. 

Titanium has an atomic number of 22 and an atomic weight of 47.867, which means it has 22 protons and, in its most common isotope, 26 neutrons (48 nucleons in total).
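
Estimating the neutron count is simple arithmetic: take the mass number (the atomic weight rounded to the nearest whole number, which for titanium corresponds to its most common isotope) and subtract the atomic number. A tiny sketch follows; note that this shortcut does not work for every element, since atomic weight is an average over isotopes.

    def approx_neutrons(atomic_number, atomic_weight):
        """Rough neutron count: mass number minus proton count."""
        return round(atomic_weight) - atomic_number

    print(approx_neutrons(22, 47.867))  # titanium -> 26 neutrons (Ti-48)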

Everyday Uses of Titanium

Titanium is used in many different industries, and there are several everyday uses you may not be aware of. Because titanium is lightweight, strong, and corrosion-resistant, it is well suited to a wide range of products, including the following.

    • Sports equipment – If you are a sports fan, you may have seen athletes using titanium-containing gear, from golf clubs and bicycle frames to tennis rackets. 
    • Medical equipment – Titanium is very safe around living tissue, non-magnetic, and easy to sterilize, which is why it is used in surgical implants and in hardware that must operate around MRI machines. 
    • Marine parts – If you own a boat, you may be surprised to learn that propellers, rudders, and other fittings are often made of titanium, because it is strong, lightweight, and highly resistant to saltwater corrosion. 
    • Water and air purification – Water-treatment and desalination plants often rely on titanium components, such as heat exchangers and filter housings, because the metal withstands constant exposure to water and chemicals. 
    • Construction – Buildings, bridges, and other infrastructure sometimes incorporate titanium, particularly for cladding and exposed elements, because it is highly corrosion-resistant and very strong. 
    • Food packaging – If you have ever eaten food from a pouch or a brightly printed package, there is a good chance it contained titanium dioxide, a titanium compound widely used as a white pigment and coating.

How is Titanium Produced?

Titanium is produced through a process known as the Kroll process. First, titanium ore (such as rutile or ilmenite) is mined and reacted with chlorine and carbon at high temperature to produce titanium tetrachloride (TiCl4).

The titanium tetrachloride is then purified and reduced with molten magnesium, which strips away the chlorine and leaves a porous metallic “sponge” of titanium. The sponge is melted and cast into ingots, which are rolled and drawn into bars, sheets, and other stock. Finally, this stock is shaped into its final forms and sent on for coating or further processing.

Problems with Manufacturing and Existing Processes

As you have read, titanium is a very versatile material that can be used in a wide variety of industries. However, there are some issues with the current methods of manufacturing this metal that need to be addressed. 

    • High costs – Currently, the process of producing titanium is very energy-intensive and expensive. The cost of the metal itself is also quite high, making it costly to produce certain products. 
    • Contamination – The process of manufacturing titanium is quite complex, and there is a risk of contamination in certain areas of the process.  
    • High purity requirements – Many applications demand extremely pure titanium, and meeting those purity requirements adds further processing steps and cost. 
    • Limited scale – It is difficult to produce titanium in the quantities needed by the industries that want to use it.

Concluding Words

Titanium is a very versatile metal that can be used in a wide variety of industries. However, because of its high cost and demanding manufacturing process, it remains difficult to produce in large quantities, so it tends to be reserved for applications where its particular strengths matter most. This article has explored the many uses of titanium and the process behind its manufacture.

 

5 Buildings that Use Cantilever Architecture

Citicorp Tower looking up
Citicorp Tower, NYC. Photo: Wikimedia CC
Citicorp Tower cantilevers
Citicorp Tower cantilevers. Photo: Wikimedia CC

In the 19th century, with the advent of structural steel, engineers began using cantilevers to construct taller and bolder buildings. This type of architecture is primarily used when there isn’t enough space on one side of a structure for its foundation: engineers build the foundation out from one side and then use beams that extend from it to support the weight. 

This construction style is eye-catching and certainly more daring than conventional methods of building. It also requires serious engineering skill, as well as a detailed understanding of how much load the beams can bear without giving way. Indeed, correct structural engineering is imperative, as even a small miscalculation in the steel and concrete can result in catastrophe.
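
To get a feel for the numbers involved, the classic textbook formula for a cantilever beam carrying a point load at its free end gives a tip deflection of delta = F * L^3 / (3 * E * I). The sketch below plugs in assumed, purely illustrative values for a steel beam; a real building cantilever is far more complex, but the cubic dependence on length shows why small errors grow so quickly.

    # Tip deflection of an end-loaded cantilever beam: delta = F * L^3 / (3 * E * I)
    # All values below are assumed and purely illustrative.
    F = 50_000.0   # end load in newtons (about 5 tonnes)
    L = 6.0        # cantilever length in metres
    E = 200e9      # Young's modulus of steel, in pascals
    I = 8.0e-5     # second moment of area of the beam cross-section, m^4

    deflection = F * L**3 / (3 * E * I)
    print(f"Tip deflection: {deflection * 1000:.0f} mm")  # about 225 mm for these numbers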

If you live in a big city, you might have noticed that more buildings are being built with these overhangs. This is especially true for cities where space is at a premium, such as New York City. In this article, we are going to take office building construction to a whole new level – the use of cantilevers!

The Rotterdam Tower

De Rotterdam tower showing cantilevered construction
Photo: WikiPedia-CC

This intriguing building is located in Rotterdam, in the Netherlands, next to the Erasmus Bridge. It is a mixed-use building that houses offices, a hotel, and apartments. Its cantilevered design is one reason residents can enjoy a gorgeous view of the river.

The architects designed the building so that it extends out toward the river, almost touching the bridge, and so that it is taller on one side. The building’s weight is distributed between its central core and its cantilevers, which is how it can rise so high without overloading the ground beneath the overhangs.

Statoil Regional and International Offices

Statoil is an energy producer in Norway and the 57th largest company in the world. Norwegian architects A-Lab designed a 117,000-square-meter commercial building complex that fits into the picturesque shoreline of Fornebu in perfect harmony.

Injecting new energy into the nearby park and commercial area was a key challenge of the design, and it is the overhangs that make the building stand out: they stretch up to 100 feet in several directions.

Marina Bay Sands Hotel

The Marina Bay Sands Hotel is considered one of the most impressive hotels in the world. It is a massive construction project that began in 2003 and was completed in 2011, a collaboration between the Las Vegas Sands Corporation and the Singapore government built on the site of a former shipyard. The hotel has three 55-story towers, and spanning all three of them is a cantilevered sky park.

Designed by Israeli-born architect Moshe Safdie, the hotel has 2,500 rooms and a lobby that connects all three towers, just like the sky park above.

Marina Bay Sands Hotel Singapore

Marina Bay Sands Hotel by architect Moshe Safdie. Photo by Julien de Salaberry on Unsplash

The Shifting Sands of Singapore 

One of the most unusual aspects of the construction of the Marina Bay Sands Hotel was that the builders had to constantly monitor the ground and adjust the foundations as the ground shifted. This is common in areas where there is a lot of water, such as Hong Kong and the Netherlands. However, it is unusual for the ground to shift to the same extent in an area that does not experience major flooding.

One theory is that the government has been dredging sand from the bay for years in order to extend the shoreline, which may have caused the ground to subside. Another theory is that the weather changed and the region experienced a period of unusually heavy rain. Whatever the cause, the shifting sands proved to be a major challenge to construction of the hotel.

Building the Hotel

One of the most interesting aspects of the construction of the Marina Bay Sands Hotel was that builders used an unusual design that allowed them to build upwards while keeping the foundations stable.

This was necessary because Singapore is built on a floodplain, and it is impossible to build foundations below ground level. The builders designed the foundation so that the bottom of the hotel would be built on a metal mesh, which would be anchored to the ground. The metal mesh would keep the foundation stable, while allowing sand and water to flow freely through it. The metal foundation is built in modular sections, which can be raised and lowered as necessary. The builders also used a system of shuttles to transport construction materials to the upper floors of the hotel, as well as the rooftop.

Lessons Learned from MBS’s Construction

As we have seen, the construction of the Marina Bay Sands Hotel was a challenge. It is rare for the ground to shift so dramatically in an area where there is no flooding, and it is even more unusual for builders to build on top of a metal foundation. Although this construction project was unique, it still provides some important lessons for other builders.

The first is that challenges are an inevitable part of construction, and there are always a number of factors that have to be taken into account. The second is that challenges should not be seen as a reason to abandon the project. When building on the water, the builders of the Marina Bay Sands Hotel had to be flexible, and ready to make adjustments at any time. If they had been too rigid, they may not have been able to proceed with the project at all.

One Vanderbilt – New York City

Vanderbilt Office Building under construction
Vanderbilt Office Building under construction. Photo: SS

With space at such a premium in this city, the only way to build is up, and even then it might not be enough to provide the amount of office space that the developers envisioned for the Vanderbilt tower.

Located across from Grand Central Terminal, One Vanderbilt is the fourth tallest building in NYC, rising 1,401 feet above the ground. On the south and west sides it is cantilevered over Vanderbilt Ave. and 42nd Street respectively; the overhang begins only about 50 feet up and supports the rest of the superstructure above it. There is an observatory at the top, the fifth observation deck in Manhattan. 

Other skyscrapers with noticeable cantilevered construction in New York include Central Park Tower and the Citicorp Headquarters, displayed above. 

Frank Gehry’s Chiat/Day Building

Binoculars Building, Los Angeles
Binoculars Building, Los Angeles, CA. Photo: Wikimedia CC

This building is a former office building in Los Angeles, California that was converted into a mixed-use building. It is now home to a variety of businesses, as well as the famous advertising agency Chiat/Day.

Designed by the notable architect Frank Gehry, the building has a cantilever on one side so that it could house all of the required space. The cantilever was designed so that it would not overload or damage the building’s foundations.

The building’s cantilever also allowed the designers to create an interesting façade: the second floor extends outward to form a terrace, which is accessible from the sidewalk.

Summing Up

The cantilever is an interesting architectural feature that most people probably never think about as they walk under these overhangs. It is a complex engineering solution that isn’t suitable for every project; in these examples, however, it works brilliantly.

While cantilevers may be striking to look at, they also serve a critical function, which makes them a necessity. The specific structural design of each cantilever varies with the building’s type, design, and location, but the overall concept is the same.

 

 

James Webb Telescope – What is it?

Carina Nebula
NGC 3324 in the Carina Nebula Star-forming region from James Webb. Photo: NASA Public Domain

A Giant Feat for Mankind

By far, the most extraordinary images from outer space that have ever been received have come from the James Webb telescope. As the successor to the famous Hubble Space Telescope, the James Webb is the most powerful space observatory ever built, with far more potential than anything that has come before it.

Launched on Christmas Day 2021 on an Ariane 5 rocket, this giant observatory, with a sunshield the size of a tennis court, now orbits the L2 point, located about 1 million miles (1.5 million kilometers) from Earth. It is sending back extraordinary images of objects from nearly as far back in time as the Big Bang itself, some 13.8 billion years ago. 

To understand why this matters so much to humanity, we first have to understand what the JWST is not. It is not a souped-up version of the Hubble; nor is it an alternative to Hubble — something different but still essentially the same.

Instead, the JWST represents a completely new paradigm in design and function for a space-based optical telescope. In other words: It’s like nothing we’ve ever seen before.

How Does the JWST Differ from Hubble?

James Webb Telescope
JWST in space near Earth. James Webb telescope far galaxies and planets explore. Photo: iStock

The two telescopes, while both space-based observatories, differ significantly in two categories.

    • Mirror size
    • Light spectrum

Size Does Matter!

There is a major difference in size between the JWST’s mirror and Hubble’s. As discussed later in the article, the bigger the mirror, the farther out into space we can see.

James Webb Telescope mirrors compared to Hubble's mirrors
James Webb Telescope mirrors compared to Hubble’s mirrors. Photo: Nasa.gov

As a result, this amazing observatory is also about 10 times more powerful than Hubble, with a much wider field of view — and, therefore, able to observe more objects.
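
A telescope’s light-gathering power grows with the area of its primary mirror, and area grows with the square of the diameter. Using the published mirror diameters (roughly 6.5 m for the JWST and 2.4 m for Hubble), the quick sketch below shows where much of that extra power comes from; it ignores segment gaps and the secondary-mirror obstruction, which is why simple area ratios and quoted “power” figures never match exactly.

    import math

    def mirror_area(diameter_m):
        """Collecting area of a circular mirror, in square metres."""
        return math.pi * (diameter_m / 2) ** 2

    jwst_d, hubble_d = 6.5, 2.4   # approximate primary mirror diameters, metres
    ratio = mirror_area(jwst_d) / mirror_area(hubble_d)
    print(f"JWST gathers roughly {ratio:.1f}x as much light as Hubble")  # ~7.3x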

Electromagnetic (Light) Spectrum

The JWST is designed to observe light at infrared wavelengths, letting it see objects that are invisible to the human eye, whereas Hubble primarily observes visible and ultraviolet light. 

This is significant because only a small fraction of objects in the universe shine brightly in visible light, while nearly everything, even cool gas and dust, glows in the infrared. As such, the JWST, in conjunction with telescopes that observe at other wavelengths, allows us to view a much bigger chunk of the universe than Hubble ever could.

The JWST also extends slightly into visible red and orange light, but unlike Hubble it cannot observe ultraviolet light, so the two observatories complement rather than replace each other.

Why is the JWST Important?

The JWST is a completely different kind of telescope that exploits a different approach to astronomy and will, therefore, produce many different results.

With its ability to detect light from the first stars that ever formed in the universe and the first galaxies that ever formed after the Big Bang, it will, for the first time, give us a comprehensive picture of the evolution of the cosmos. 

The JWST will also allow us to look for the earliest signs of life beyond our planet and, as such, represents a major step on humanity’s path toward enlightenment, as well as a greater understanding of who, what, and where we are.

The Telescope Assembly

The observatory is primarily composed of three components:

    • The Integrated Science Instrument Module (ISIM)
    • The Spacecraft Element
    • The Optical Telescope Element (OTE)

Integrated Science Instrument Module

This is where the infrared instruments are housed. It contains the infrared cameras and the spectrographs (devices that separate incoming light by its wavelength, or frequency).

 

James Webb Infrared Component
James Webb Infrared System. Photo: NASA

The Fine Guidance Sensor / Near InfraRed Imager and Slitless Spectrograph is used to pinpoint the locations that the JWST will look at.

The Optical Telescope Element (OTE)

This is where the mirrors are contained, and the mirrors are the most significant part of the telescope. Simply put, the larger the mirror, the farther back in space we can see and with greater detail. More specifically, the telescope’s sensitivity grows with the mirror’s collecting area, and its ability to resolve fine detail also improves with size: the larger the mirror, the more detail it will show.

This amazing high-tech instrument consists of hexagonal mirror segments, each measuring over 4.2 feet across and weighing approximately 88 pounds. Its 18 primary segments work together to act as one large 21.3-foot mirror.

The mirrors are made of ultra-lightweight beryllium, chosen for its thermal and mechanical properties at cryogenic (very low) temperatures, and for its low weight, which made the mirror far easier to launch into space.

James Webb mirror assembly
James Webb mirror assembly. Each segment has a thin gold coating chosen for its ability to reflect infrared light. The largest feature is the five-layer sun shield, 80 feet long and 30 feet wide, which blocks more than a million times the solar heat that would otherwise reach the telescope. Photo: NASA

“The James Webb Space Telescope will be the premier astronomical observatory of the next decade,” said John Grunsfeld, astronaut and associate administrator of the Science Mission Directorate at NASA Headquarters in Washington. “This first-mirror installation milestone symbolizes all the new and specialized technology that was developed to enable the observatory to study the first stars and galaxies, examine the formation of stellar systems and planetary formation, provide answers to the evolution of our own solar system, and make the next big steps in the search for life beyond Earth on exoplanets.”

Amazingly, the mirrors will fold in order to fit into the spacecraft and then unfold when ejected into outer space.

“After a tremendous amount of work by an incredibly dedicated team across the country, it is very exciting to start the primary mirror segment installation process,” said Lee Feinberg, James Webb Space Telescope optical telescope element manager at Goddard. “This starts the final assembly phase of the telescope.”

Bill Ochs, James Webb Space Telescope project manager, said, “There have been many significant achievements for Webb over the past year, but the installation of the first flight mirror is special. This installation not only represents another step towards the magnificent discoveries to come from Webb but also the culmination of many years of effort by an outstanding dedicated team of engineers and scientists.”

The Spacecraft Element

Something must power this system, and the spacecraft element is what does it. It supplies the rocket thrusters, the propulsion system, communications, and all the electrical power needed to keep the observatory running like a well-oiled machine.

Where are We Now?

SMACS 0723 galaxy cluster. Farthest image recorded from the James Webb telescope
Deepest Infrared Image of the Universe Ever Taken. Photo: NASA Public Domain 

We will leave you with this: galaxy cluster SMACS 0723, which contains thousands of galaxies, is 4.6 billion light-years away.

That means that we are looking at it the way it looked 4.6 billion years ago. Scientists have a lot of work ahead of them and who knows what they’ll find?

Space Shuttle Columbia History

Rocket Garden Kennedy Space Center
Cape Canaveral, Florida – March 2, 2010: The Rocket Garden at the Kennedy Space Center. Eight milestone launch vehicles from KSC’s history are displayed. Photo: iStock

With NASA planning new trips to the moon and Mars, and Elon Musk jumping in with his successful SpaceX program, we thought it would be a good time to look back at how we got to this point, and what better way to begin than with the Space Shuttle program. (Yes, we could go back further to the Saturn V and the crewed moon missions, but we will cover those in a separate article, because such a major achievement deserves its own space. Pun intended 😃)

Space Shuttle Overview

Space Shuttle Columbia from its 16th flight landing at Kennedy Space Center
Space Shuttle Columbia from its 16th flight landing at Kennedy Space Center Photo: Wikimedia Public Domain

The space shuttle Columbia was the first of the shuttle orbiters to be launched and ultimately became a feat of engineering excellence. It was the most complex machine ever built to carry humans to and from space, and it opened a new era of space exploration, leading to two decades of an unsurpassed legacy of achievement.

The difference between the shuttle program and the rockets that preceded it was that these spacecraft were designed to be used over and over again. Columbia completed 28 missions over a 22-year span.

In the Beginning

The Columbia Space Shuttle was named after a sailing vessel that operated out of Boston in 1792 and explored the mouth of the Columbia River. Construction of the orbiter began in 1975 in Palmdale, California, and it was delivered to the Kennedy Space Center in 1979.

There were many initial problems with this orbiter, which delayed its first launch, but finally, on April 12, 1981 (the 20th anniversary of Vostok 1, the first human spaceflight in history), Columbia lifted off on the first of its Orbital Flight Test Program missions.

Columbia orbited the Earth 36 times, commanded by John Young, a Gemini and Apollo program veteran, before landing at Edwards Air Force Base in California. 

The Mission

Columbia was used for research with Spacelab, and it flew the only flight of Spacehab‘s Research Double Module. It was also used to deploy the Chandra X-ray Observatory, a space telescope.

Columbia’s last successful mission, launched in 2002, was its 27th flight and serviced the Hubble Space Telescope. Its next mission, STS-107, ended in the loss of the orbiter when it disintegrated during reentry into the atmosphere, killing all seven of its crew.

February 1, 2003

NASA Columbia Crew
The STS-107 crew includes, from the left, Mission Specialist David Brown, Commander Rick Husband, Mission Specialists Laurel Clark, Kalpana Chawla, and Michael Anderson, Pilot William McCool, and Payload Specialist Ilan Ramon. (NASA photo. via Wikipedia)

After a successful mission in space, the seven members of the Columbia crew began their return for reentry into Earth’s atmosphere, but something was about to go terribly wrong.

The damage had actually been done more than two weeks earlier: during Columbia’s launch on January 16, 2003, a small section of insulating foam broke off the external fuel tank and struck the orbiter’s left wing. At first glance, one might think this would not be a major problem, but when it comes to spaceflight and all the engineering complexities that come with it, one small defect can lead to disaster, and sadly, that is exactly what happened on February 1, 2003, as Columbia reentered the atmosphere.

After months of investigation, it was determined that the foam strike had breached the reinforced carbon-carbon panels on the leading edge of the left wing, allowing superheated gases to penetrate the wing structure during reentry and tear the orbiter apart.

This was the second disaster in which astronauts were lost during a space shuttle flight. The first was the Challenger, on January 28, 1986. This author distinctly remembers watching the Challenger lift off and then seeing the explosion. Everyone knew at that moment that something was wrong.

The Result

The benefits that humankind has gained from these shuttle flights were enormous. There were missions directly involved in launching and servicing the Hubble Space Telescope, docking with the Russian space station Mir, as well as performing scientific experiments that have ultimately benefited all of us.

In 2004, President Bush announced the retirement of the Shuttle fleet in favor of the new Constellation program, and the orbiters flew their final missions in 2011. Constellation, however, ran into serious costs and delays and was subsequently canceled by President Obama in favor of using private companies to service the International Space Station. From then on, U.S. crews accessed the ISS via the Russian Soyuz spacecraft until a U.S. crew vehicle was ready.

Today, we are experiencing achievements never before considered possible within our lifetime. From the amazing photos from the James Webb telescope to our planned missions to the moon and Mars, we owe a debt to those who came before these missions, lest we forget the ones who ultimately gave everything for the benefit of humankind!

 

 

What is the Atom Made of?

Did you ever see the movie “The Incredible Shrinking Man”? If you have, did you ever wonder what would happen to him when he gets so small that he would be the size of an atom? And if so, could he get any smaller?

Maybe we have the answer, because atoms are the smallest particles of matter that cannot be broken down any further by chemical means. Everything we see around us is made of atoms, from tables and chairs to people and pandas. 

What Makes Up the Atom?

Illustration of the Atom
Atoms consist of three basic particles: protons, electrons, and neutrons. Nucleus. This atom has a neutral charge as it contains the same amount of protons and electrons. Photo: iStock

Comparatively speaking, atoms contain mostly empty space, but don’t let that fool you into thinking they are not important. The components of the atom and what makes up the atom are fundamental to our understanding of how matter is assembled. That includes living organisms, both here on earth and elsewhere. 

Now let’s talk about the components. A typical atom consists of a nucleus in its center. This nucleus contains neutrons and protons (together they’re called nucleons). Protons have a positive charge. Neutrons have neither positive nor negative charges. They are ‘neutral’.

Surrounding the nucleus are electrons, which orbit around it much as the planets orbit the sun. Besides the enormous difference in scale, the major difference is that the planets are held in orbit by gravity, while electrons are bound to their nucleus by electrostatic attraction (the electromagnetic force).

The Electron

Electrons orbit the nucleus of the atom. They are negatively charged particles, and they are the only particles found outside of the atom’s nucleus.

Neutral Atoms

A neutral atom has no net electrical charge. You can think of it as a bundle of protons, neutrons, and electrons in which the positive and negative charges balance out. A neutral atom has an equal number of protons and electrons (the number of neutrons can differ). For example, hydrogen has one proton and one electron, and its most common isotope has no neutrons at all, while helium atoms have two protons, two neutrons, and two electrons. This balance of charge is why we refer to these atoms as neutral.

Ions

Any time an atom loses or gains an electron, it becomes charged. If it loses an electron, it becomes positively charged, because there are then more protons in the atom than electrons. If it gains an electron, it becomes negatively charged. 

When atoms gain or lose electrons, the resulting ions can bond with oppositely charged ions to form compounds.

Let’s also state that regardless of the number of electrons lost or gained, the identity of the atom, which element it is, is determined by its number of protons, as we will see below. 

The Proton 

Periodic table
Image by Calua from Pixabay

Protons are found only in the nucleus. The number of protons in an atom is what makes it the element it is. In the periodic table, each element is labeled with its atomic number, which tells us the number of protons in that element. The mass number is the number of protons and neutrons together.

Neutrons

The neutron’s job is to help hold the nucleus together: it contributes to the strong nuclear force that binds the nucleons while adding no electric charge, offsetting the electrical repulsion between the positively charged protons. The number of neutrons determines the isotope, and a nucleus with too many or too few neutrons becomes unstable (radioactive). The neutron doesn’t interact with electrons or anything else outside the nucleus, so in everyday chemistry it is mostly just along for the ride.

All Together Now

The negative charge of the electrons and the positive charge of the protons are what maintain the orbit of the electrons around the nucleus. This is the electrostatic, or electromagnetic, force: the attraction between the positive charge of the protons and the negative charge of the electrons is what keeps this orbit in place.
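
That attraction can be put into numbers with Coulomb's law, F = k * q1 * q2 / r^2. The sketch below estimates the force between a proton and an electron separated by one Bohr radius, roughly the size of a hydrogen atom; the constants are standard physical values, and the picture of a neat circular orbit is, of course, a simplification of the quantum reality.

    # Coulomb attraction between a proton and an electron in a hydrogen atom
    K = 8.9875e9            # Coulomb constant, N*m^2/C^2
    E_CHARGE = 1.602e-19    # elementary charge, coulombs
    BOHR_RADIUS = 5.29e-11  # typical proton-electron separation, metres

    force = K * E_CHARGE**2 / BOHR_RADIUS**2
    print(f"Attractive force: {force:.1e} N")  # about 8e-8 N, enormous at atomic scale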

Now, let’s sum up the atom’s components and how their respective charges combine to make up different types of atoms. 

Conclusion

Atoms are the smallest particles of matter that cannot be broken down into smaller components by chemical means. Everything we see around us is made of atoms. Atoms are mostly empty space, but they’re fundamental to our understanding of how matter works. A typical atom consists of a nucleus containing neutrons and protons (together called nucleons), with electrons orbiting around it. The electrons carry a negative charge; the protons carry a positive charge, giving the nucleus an overall positive charge. 

Neutral atoms have equal numbers of protons and electrons. Ionic compounds are made up of positively and negatively charged ions held together by their strong mutual attraction. Electrons are negatively charged particles that orbit the nucleus. Atoms are the building blocks of everything in the universe, and they are fundamental to our understanding of how matter works.

The Hoax of the Moon Landing Hoax

Illustration of the accusation that the moon landing was a hoax
Cartoon illustration of the claim that the 1969 moon landing was nothing more than a Hollywood stunt. Photo: iStock

The CIA was involved in the JFK assassination, the US government was behind the 9/11 attacks, there are space aliens among us, and so on. There are many more, including one that is particularly disturbing, initiated by conspiracy theorist and talk show host Alex Jones, who said that the tragedy at Sandy Hook, Connecticut never happened.

Fortunately, in this case, Alex Jones got his due, and maybe next time, if he still has a platform, he will think twice before promoting such preposterous claims.

So much for our shock-and-awe introduction; let’s tone it down a bit and concentrate on one particular conspiracy theory. For this one, the so-called moon landing hoax, we are going to take the accusations and rip them apart, one by one. Not because we have to, but because showing how ridiculous these theories are will, hopefully, have a domino effect on those who continue to fall prey to such preposterous speculations.

“I Am Telling You! The Moon Landing Was a Hoax”

The claim is that Neil Armstrong never set foot on the moon and that it was all staged on Earth, with cameras and props, at an unknown location somewhere in the United States. Some go as far as saying that Disney staged it in Hollywood.

Time to Debunk!

The Waving Flag

Astronaut on the Moon with flag
Photo by NASA on Unsplash

It is said that when the US flag was planted on the moon, the photographs and videos show it appearing to move. The moon has no air or wind, so, the argument goes, the landing must have been filmed on a Hollywood set and is completely fake.

There are currently six flags on the moon, one from each successful moon landing. The flags are made of nylon and are held up by interlocking aluminum poles, including a horizontal crossbar along the top. These poles were designed by a team of engineers, resulting in a kit named the Lunar Flag Assembly.

Different soil conditions and other factors, such as radiation from the sun, had to be researched in order to send up a functioning flag and flagpole. The original flag appears to be moving because the horizontal crossbar that holds the top of the flag out was not extended all the way by the astronauts. This is why the flag looks rippled in photos and gives the illusion of movement in the NASA video. The flag moves only when it is acted upon, either touched by the astronauts (and then carried by its own inertia) or buffeted by a rocket blast. 

Needless to say, if this had been shot on a Hollywood set, it would have been indoors, so where would the wind have come from? In addition, if this were such an elaborate hoax, does anyone think its creators would be so careless as to overlook such an obvious ‘fault’?

Two Shadows

In some photographs, there appear to be two shadows cast by the astronauts or the Lunar Module. This is easily explained. 

The sun is not the only source of light on the lunar surface. Sunlight reflecting off the bright lunar soil, the Lunar Module, and the astronauts’ suits provides secondary illumination, and the uneven ground and wide-angle camera lenses make shadows appear to point in different directions. Earthshine, sunlight reflected from the Earth, adds a further faint glow. The angles of the shadows are also directly related to the sun’s position, just as they would be here on Earth; hence, more than one shadow can appear.

Moon Dust 

The Apollo 16 footage from NASA shows how the rover kicks up dust as it drives. The dust forms a rooster tail because of the moon’s low gravity and lack of atmosphere.

This tail is a product of the lunar environment; dust on Earth cannot fly the way it does on the moon. This was confirmed by a study by two scientists at the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder, who modeled the Apollo 16 footage mathematically and showed that dust kicked up by a rover on Earth would land very differently. 
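
The underlying physics is easy to sketch. With no atmosphere there is no air drag, so a kicked-up dust grain follows a clean ballistic arc whose length depends only on launch speed, angle, and gravity; lunar gravity is about one-sixth of Earth's, so the same grain travels roughly six times farther, and on Earth air drag would shorten the arc even more. The numbers below are assumed, illustrative values, not a reconstruction of the LASP study.

    import math

    def ballistic_range(speed_ms, angle_deg, gravity):
        """Range of a drag-free projectile launched over flat ground, in metres."""
        angle = math.radians(angle_deg)
        return speed_ms**2 * math.sin(2 * angle) / gravity

    SPEED = 3.0   # assumed launch speed of a dust grain, m/s
    ANGLE = 30.0  # assumed launch angle, degrees

    moon = ballistic_range(SPEED, ANGLE, 1.62)    # lunar gravity
    earth = ballistic_range(SPEED, ANGLE, 9.81)   # Earth gravity, ignoring drag
    print(f"Moon: {moon:.1f} m, Earth (no drag): {earth:.2f} m")  # ~4.8 m vs ~0.8 m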

Enormous Effort

To discredit the claims further, consider the enormous amount of work that would have gone into such a production, even on a Hollywood set. A whole film crew would be required to make this ‘movie’, and it would not be just any movie: it would be a film designed to fool the entire world, including scientists, astronomers, and electrical and mechanical engineers, to name a few, and, if it was run by the government, a fair number of politicians up and down the ladder as well.

That does not include renting the studio and all the bureaucracy that would go with it. In all, no fewer than a few hundred people would have to be hired to fake this event. With so much equipment and so many people involved, why has no one ever come forward to speak out?

Yes We Were Really There

Besides the hundreds of people who would have been needed for the ‘Hollywood’ version, the real program was vastly larger: some 400,000 people across NASA and its contractors worked to accomplish this amazing task more than 50 years ago. They even brought back moon rocks!

The hoax claims are also an insult to the amazing astronauts who risked, and in some cases gave, their lives for their country, for NASA, and for the people of the world.

The Apollo 11 crew: Neil Armstrong, Michael Collins, and Buzz Aldrin.
The Apollo 11 crew

There is such indisputable proof that we landed on the moon that it is almost impossible to claim credibly that it was faked.  

This author had the pleasure of meeting Buzz Aldrin to discuss his amazing journey, and looking back to 1969, when I was a young boy, I can vividly remember staying up with my father to watch this incredible feat of engineering and determination. It was real to us then and it is real to us now! 

 

EV FAQs and Figures

Note: This article is about fully electric vehicles. Not hybrids. 

How Much Do EVs Cost?  

Electric vehicles can run from $30,000 on the low end to over $100,000 on the high end. Tesla is the major seller of EVs, with 1,917,450 vehicles sold since its first cars were manufactured; the company, led by Elon Musk, brought in revenue of $53.8 billion in 2021. Aside from Tesla, other manufacturers that make electric vehicles include BMW, Nissan, Chevrolet, Ford, Volkswagen, and Kia. 

What are the Advantages of Owning an Electric Car?

Of course, the main reason for owning an EV is the savings from not having to gas up your car. Additionally, EVs don’t have a combustion engine, so there are fewer parts to wear out or fail during your ownership. EVs also help with the environment and climate, and they run very quietly.

What are the Disadvantages of Owning an Electric Car?

The initial expense of purchasing an EV is what keeps many would-be buyers away. Then there is the cost of having a 220-240 volt circuit installed in your home’s electrical panel, which can run from $600 to $1,000. 

If you live in an apartment, you may run into an additional issue if your building or development does not have an EV charging station available, but more and more buildings are installing them, as are shopping malls, public garages, and of course many car dealerships.

Do EVs Need Oil Changes?

Conventional cars with gas engines need regular maintenance: the oil keeps the pistons running smoothly in the engine’s cylinders, and an oil change is usually done every six months or 3,000 miles. Since an EV has no gas engine, no oil changes are needed.

How Long Does It Take to Charge an EV?

That depends upon the charger you are using. Two types are commonly available for home use. A Level-1 charger connects to any standard 110-volt outlet; it adds range slowly, so fully charging an EV’s battery this way can take a full day or longer.

Then there is the Level-2 charger, which runs on a 220-240 volt circuit. Charging of this type usually takes roughly four to eight hours to reach a full charge, depending on the battery’s size and the charger’s power.

There are also Level-3 chargers, better known as DC fast chargers, found along highways and at public charging sites; they can restore most of a battery’s charge in well under an hour, and newer units continue to cut that time further.
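
A rough charging-time estimate is simply the energy you need to add divided by the charger's power, with an allowance for charging losses. The sketch below uses assumed, typical power ratings (about 1.4 kW for a Level-1 outlet, 7 kW for a home Level-2 unit, and 100 kW for a DC fast charger); your actual charger, the vehicle's charging limits, and battery temperature will all change the numbers.

    def charge_hours(energy_needed_kwh, charger_kw, efficiency=0.9):
        """Rough time to add the given energy, allowing for charging losses."""
        return energy_needed_kwh / (charger_kw * efficiency)

    energy_to_add = 48.0  # e.g. taking a 60 kWh pack from empty to the 80% mark

    for label, power_kw in [("Level 1 (1.4 kW)", 1.4),
                            ("Level 2 (7 kW)", 7.0),
                            ("DC fast (100 kW)", 100.0)]:
        print(f"{label}: about {charge_hours(energy_to_add, power_kw):.1f} hours")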

Does Fully Charging Mean It Charges Up to 100%?

No. EV battery manufacturers generally agree that these batteries should not routinely be charged to 100%, because doing so degrades the battery over the long run. Charging to 80% is the recommended level and is usually set as the default for most EVs.

You can override this setting if you are planning a long trip but it is not recommended to keep it at the 100% charge level continuously.

So when we say we are fully charging our EV, it means that we have charged up to the 80% mark.

How Many Miles Can I Get on a Full Charge?

Most EVs in the lower price range get about 230 miles on a full charge. Some higher-end models, such as the Tesla Model S, can get up to 394 miles when the battery is at full capacity. A Kia Niro, a great EV with excellent reviews, will get about 253 miles on a full charge, and the 2023 Chevy Silverado EV is slated to offer a roughly 200 kWh battery that can take you a good 400 miles before recharging.

Can I Go on Long Rides With My EV?

Yes, but it is highly suggested that you plan your trip with charging stops as your main priority. Check the highway’s rest areas to see if they have charging stations along your route, but be aware that if you have to charge your EV mid-trip, you may have to stay a while, from under an hour at a DC fast charger to several hours on a Level-2 unit. 

Of course, you don’t have to fully charge your car. If you could just add another 100-150 miles, that would cut down the time spent waiting. 

Either way, plan ahead so that you can find things to do while the car is charging. Some locations may have a restaurant where you can have a long dinner, and some towns have charging ports on the street with plenty of stores to wander through.

Then there is the hotel. Call ahead to find out if it has EV charging and, if not, where the closest station is.

In the near future, more and more charging stations will be added along highways and at private locations such as housing developments and hotels. As part of his infrastructure plan, President Biden has allocated $5 billion to expand the US charging network toward a goal of 500,000 public chargers, and that doesn’t include additions by private enterprises.

How Much Does It Cost to Charge an EV from Your House? 

Your electric utility can give you the best answer, but in general, residential electricity costs between about 14 and 20 cents per kilowatt-hour (kWh).

So, for example, if your EV has a 65 kWh battery, fully charging it at 20 cents per kWh would cost 65 × 0.20 = $13. That means you would spend about $13 to go roughly 253 miles. That’s not too shabby! 
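
The same arithmetic generalizes easily: multiply the energy you add by your electricity rate, then divide by your range to get a cost per mile. A minimal sketch using the example figures above (a 65 kWh pack, 20 cents per kWh, and roughly 253 miles of range):

    battery_kwh = 65.0
    price_per_kwh = 0.20   # dollars, at the high end of typical home rates
    range_miles = 253.0

    cost = battery_kwh * price_per_kwh
    print(f"Full charge at home: ${cost:.2f}")           # $13.00
    print(f"Cost per mile: ${cost / range_miles:.3f}")   # about 5 cents per mile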

How Much Does It Cost to Charge an EV Outside of Your House?

Electric Vehicle being charged in a garage
Photo by Michael Fousert on Unsplash

The cost to charge your EV away from home depends on a number of factors, but in general, expect to pay between $20 and $30 for a full charge at a public charger. That is still much better than fueling a conventional gas car, since it can add a good 200-300 miles of range; try getting 200 miles for $30 in a conventional car!

Plan Ahead!

If you are going to purchase an EV, first work out your expected costs against what a gas car would cost. Then think about how you plan to use the car: will it be for local driving and commuting, or mainly for long trips? At this time, we would recommend an EV primarily for local driving and commuting. Whatever you choose, enjoy your ride!

What Components Make Up EV Batteries?

Photo iStock, Credit: Golden Sikorka

E‍V Battery Overview

In our previous article, we discussed the advantages of owning an electric vehicle. Now let’s delve further into the central component of an EV: its battery.

Electric vehicle batteries consist of several subcomponents that work together to store and discharge electricity. These individual sections are also known as cell components or cell materials. The parts combine to form the complete battery and each has its own unique properties and function.

When considering the various types of electric car batteries, it’s important to understand what goes into them. Knowing how they function can help you make a more informed decision when purchasing a new electric car, hybrid, or extended-range electric vehicle (EREV) battery.

What are the Components of EV Batteries?

Before we review these components, we need to make sure we understand what an electrode is.

An electrode is a conductor: either a negatively charged material (the anode) or a positively charged material (the cathode). You can read more about electrodes here.

The different elements of an electric car battery include the following:

  • Anode – The anode is the negative electrode of the battery. In lithium-ion cells it is typically made from graphite, a porous material with high electrical conductivity that lets lithium ions and electrons move in and out easily.
  • Cathode – The cathode is the positive electrode of the battery. It is usually made from a metallic oxide material, such as a lithium nickel, manganese, or cobalt oxide, or lithium iron phosphate.
  • Separator – The separator is a thin, porous material that sits between the anode and the cathode. Its purpose is to keep the electrodes from touching each other. This is important to prevent overheating, which could result in the battery catching fire.
  • Electrolyte – The electrolyte is a liquid or gel that conducts charge inside the cell. It carries lithium ions between the anode and the cathode, while the electrons travel through the external circuit.
  • Container – The container or housing holds all of the components of the battery in place. It’s made from a corrosion-resistant material, such as stainless steel.
  • Cooling System – The cooling system ensures that the battery does not overheat. This can happen if the battery is overcharged and the temperature of the battery rises.

Battery Cells

Illustration of a battery cell
Photo: Wikimedia CC

The most important component of the battery is the cell, which is most often based on lithium-ion or lead-acid chemistry. The cell is composed of active materials, an electrolyte, and electrodes that together store and discharge electricity. The electrodes are the conductors through which electrons enter and leave the cell.

The most common electrode chemistries are based on lithium (in lithium-ion cells) and lead (in lead-acid cells). Batteries can be composed of one cell or many cells connected together; small devices often use a single cell, while EV packs and larger-scale storage, such as solar and grid energy storage systems, connect hundreds or thousands of cells together.

Electronic Parts

The electronic parts of an EV battery include the battery management system (BMS), the charge controller, and the voltage regulator. The BMS is basically an electric circuit that’s used to monitor the health of the battery by measuring voltage levels, charging/discharging rates, and temperature.

The BMS can also help to prevent overcharging and over-discharging of the battery. The charge controller is used to charge the battery. It helps to balance the amount of energy used to charge the battery and the amount of energy generated from the grid or solar panel.

The charge controller also measures the amount of current flowing into and out of the battery during charging. The voltage regulator is used to balance the voltage levels of the battery during charging and discharging.
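
To make the BMS's job concrete, here is a highly simplified sketch of the kind of per-cycle checks it performs. The structure and threshold values are invented for illustration only; a production BMS monitors individual cell groups, balances them, and communicates with the charger over dedicated protocols.

    # Toy battery-management check; all thresholds are illustrative only.
    CELL_V_MIN, CELL_V_MAX = 3.0, 4.2   # volts per lithium-ion cell
    TEMP_MAX_C = 55.0                   # pack temperature limit, Celsius

    def check_pack(cell_voltages, pack_temp_c):
        """Return warnings for out-of-range cell voltages or pack temperature."""
        warnings = []
        for i, v in enumerate(cell_voltages):
            if v < CELL_V_MIN:
                warnings.append(f"cell {i}: undervoltage ({v:.2f} V)")
            elif v > CELL_V_MAX:
                warnings.append(f"cell {i}: overvoltage ({v:.2f} V)")
        if pack_temp_c > TEMP_MAX_C:
            warnings.append(f"pack overtemperature ({pack_temp_c:.1f} C)")
        return warnings

    print(check_pack([3.70, 3.68, 4.25, 3.71], pack_temp_c=41.0))
    # -> ['cell 2: overvoltage (4.25 V)']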

Lead-Acid Batteries

Lead-acid batteries are the oldest type of battery used in electric cars. They are very cheap to produce and are easy to maintain. However, they are not as efficient as other battery types. They also contain toxic materials, such as sulfuric acid.

These days they are more often found in large-scale grid energy storage systems and in commercial or industrial settings; in modern electric vehicles they typically serve the auxiliary 12-volt system rather than the main traction battery. Lead-acid batteries come in both flooded and sealed types, with the flooded type being the most common; flooded batteries are filled with a liquid electrolyte.

Lithium-Ion Batteries

Lithium is popular because it gives up its outer electron very easily, and its small, light ions can move readily between the anode and cathode.

Lithium-ion batteries are very efficient, have a long lifespan, and can be charged quickly. They are less expensive than nickel-metal hydride batteries and are used in a wide range of consumer electronics. Today, hybrid electric vehicles, plug-in hybrid electric vehicles, and battery electric vehicles all use lithium-ion batteries.

Where Do the Materials that Make Up a Lithium-Ion Battery Come From?

Generally speaking, five minerals are considered essential for Li-ion batteries:

  • Lithium
  • Nickel
  • Cobalt
  • Manganese
  • Graphite

These materials are mined in many different parts of the world, with China being the major exporter of graphite, the mineral that makes up the anode in most of these batteries.

Nickel-Metal Hydride Batteries

Nickel-metal hydride batteries are also used in electrified cars, most commonly in conventional hybrid vehicles. They are robust and easier to recycle than lithium-ion batteries, but they store less energy for their weight, which is one reason most new electric vehicles use lithium-ion packs instead.

Conclusion

There are many different types of electric car batteries, each with its own unique properties and functions. When considering the various types of batteries, it’s important to understand what makes up these different battery types.

Understanding how they function can help you make a more informed decision when purchasing a new electric car battery or an extended-range electric vehicle battery.

When looking for new batteries, make sure to understand their warranties and how they are manufactured to ensure you get the best product possible. 

Quantum Computing: The Origin and Its Applications Explained

Illustration of quantum light rays
Quantum computing. Close up of optical CPU process light signal.  Photo: iStock

You have certainly heard the word 'computing', and you may have heard the term 'quantum' as well. It is far less likely, though, that you have heard the two words used together.

The term 'quantum computing' has not yet gained much traction in the tech world, and those who have waded into the subject may find it confusing, to say the least. Yet many experts believe that quantum computing is not just the future of computing but a turning point for humanity, as we move beyond the binary bit and venture into computation at the subatomic level.

If you don't have a clue what we are talking about, you are not alone. Stay with us through this article, where we will discuss quantum computing in detail: what it is, how it could change the tech world, and its practical implications, for better and for worse.

But before we get into this potentially life-changing advancement, we need to cover the foundation it is built on: quantum theory.

What is Quantum?

Illustration of the Atom
Particles of the atom: protons, electrons, and neutrons. Nucleus. Photo: iStock

A quantum (plural: quanta) is, in simple terms, the smallest amount of energy, or of any other physical quantity, that can be involved in a physical interaction.

To take examples from the world of particles: a quantum of light is the photon, and a quantum of electric charge is the charge carried by a single electron. No interaction involves anything smaller than these indivisible units.

The Potential Enabler of Quantum Computing 

The industrial revolution of the 20th century was one of the greatest milestones of modern history. From the invention of the automobile to industrial steel, elevators, and aircraft, it gave birth to a plethora of things that now define our civilization and will continue to shape the history of our future. 

Enter the 21st century and we are watching a transition from the tangible to the intangible (virtual) world; notably, computer technology, its hardware, software, and the world wide web.

Among the many incredible things that are ensuing during this technological revolution is the colossal development in physics, specifically quantum theory. We will try to keep the explanation of quantum theory as simple as possible in order to make this an interesting and informative article. 

Modern Physics

It is important to understand that physics is divided into two broad branches: classical and modern. The former took shape during the Renaissance and continued to develop after that. Classical physics is largely built on the ideas put forward by Galileo and Newton, and its principles focus on the macroscopic world, the solid, visible-to-the-naked-eye nature of things around us.

Conversely, modern physics analyzes matter and energy at microscopic levels. It leans heavily on electromagnetism, the wave nature of light and matter, and wave-particle duality. It is interesting to note that all of these motifs of modern physics come from quantum theory.

While we are at it, it is important to clarify that quantum theory doesn't refer to just one idea or hypothesis. It is actually a collection of principles. We will discuss them briefly and simply, staying focused on the points that are relevant to quantum computing.

    • The work of physicists Max Planck and Albert Einstein in the early 20th century established that energy exists in discrete units, or 'quanta'. This contradicted the classical-physics view that energy can only exist as a continuous wave spectrum.
    • In the following years, Louis de Broglie extended the theory by suggesting that at microscopic (atomic and subatomic) levels there is little difference between particles of matter and energy, and that either can behave as a particle or a wave depending on the conditions.
    • Lastly, Heisenberg proposed the uncertainty principle, which states that complementary properties of a subatomic particle (such as its position and momentum) cannot both be measured with perfect accuracy at the same time. (The relations behind these three ideas are written out just below.)
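
For readers who want the equations behind these three ideas, they are usually written as follows (standard textbook relations, included here only as a reference):

E = h·f  (Planck–Einstein relation: energy comes in quanta proportional to the frequency f, with h being Planck's constant)
λ = h/p  (de Broglie relation: a particle with momentum p behaves as a wave with wavelength λ)
Δx·Δp ≥ ħ/2  (Heisenberg's uncertainty principle: position and momentum cannot both be pinned down exactly)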

Niels Bohr's Interpretation of Quantum Theory: The Primal Basis of Quantum Computing

Illustration of a quantum computer
Image by Pete Linforth from Pixabay

During the period when the stipulations of quantum theory were being extensively debated among top physicists, Niels Bohr put forward an important interpretation of the theory. He suggested that the properties, the very reality, of any quantum system (an environment governed by wave-particle duality) cannot be determined or specified until they are actually measured.

This assertion led to the development of the principle of superposition, which, in simple words, says that a quantum system exists in all of its possible states at once until it is measured and a single state is found.

The famous Schrödinger's Cat thought experiment is an easy way to grasp this concept. In the experiment, a cat enclosed in a box (standing in for a quantum system) together with a vial of poison is considered both dead and alive at the same time, until the box is opened and the cat is observed.

Use of Superposition to Develop Computer Algorithms 

Now, this is the point where the theory actually demonstrates its potential to be the basis of a new computer algorithm. In order to understand the quantum-based algorithm, it is essential to understand how contemporary/conventional computing systems work. 

Whether it’s a handheld gadget or a supercomputer working in the server room of Google, at the core of it, every computing device works on the binary language. In conventional computing systems, every bit of information can exist in one of either two states: 0 or 1 (hence ‘binary’). 

On the other hand, when we talk about quantum algorithms, they are actually inspired by the idea that any particle-wave system can exist in multiple states at any given time (Principle of Superposition).

This means that when data is stored in a quantum system, it can be stored in more than two states at once. It is this property that makes quantum bits (also referred to as 'qubits') far more powerful, though also far harder and costlier to build, than conventional bits.
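
To make this concrete, here is a minimal sketch in Python/NumPy (a plain classical simulation, with no quantum hardware involved) of how a single qubit is described by two amplitudes rather than a single 0 or 1:

```python
import numpy as np

# A classical bit is exactly 0 or 1. A qubit's state is a pair of amplitudes,
# one for |0> and one for |1>, whose squared magnitudes sum to 1.
# This is a simulation for illustration, not real quantum hardware.

ket0 = np.array([1.0, 0.0])            # the definite state "0"
ket1 = np.array([0.0, 1.0])            # the definite state "1"

qubit = (ket0 + ket1) / np.sqrt(2)     # an equal superposition of 0 and 1

probabilities = np.abs(qubit) ** 2     # chance of seeing each outcome
print(probabilities)                   # -> [0.5 0.5]

# Measuring collapses the superposition to a single classical result.
outcome = np.random.choice([0, 1], p=probabilities)
print("measured:", outcome)
```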

Standard Binary Computing Vs. Quantum Computing 

Seamless pattern with abstract binary code, digital matrix background
4 rows of 8 bits = 4 rows of bytes. Photo: iStock

The fact that a quantum bit can exist in multiple states gives quantum computing an uncontested edge over conventional binary computing. With the help of a simple example, we will try to demonstrate how superior quantum computing could be in comparison to its classical counterpart. 

For example, picture a rod: a classical bit can only sit at one end or the other, representing a 1 or a 0. That's it! If it is a 1, it is not a 0, and there is no in-between.

A quantum bit, on the other hand, exists in a blend of both states at once. Instead of being stuck at one end of the rod, it can correspond to any point in between; physicists usually picture it as a point anywhere on the surface of a sphere whose two poles are 0 and 1.

This simple picture shows that quantum bits can represent vastly more information than classical bits, and that computing built on them could far exceed the processing of any classical machine.
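
That claim has a precise counterpart: fully describing the state of n qubits takes 2^n amplitudes, so the size of the description grows exponentially (although, to be fair, measuring n qubits still yields only n classical bits). A tiny calculation shows the scale:

```python
# The number of amplitudes needed to describe an n-qubit state is 2**n.
for n in (1, 10, 50, 300):
    print(f"{n:3d} qubits -> {float(2 ** n):.3e} amplitudes")
```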

Apart from representing more information than classical bits, quantum computing can also exploit the principle of entanglement. In simple words, entangled qubits remain linked: the state of one cannot be described independently of the others, so operations on an entangled group act, in effect, on many possibilities at once. This feature multiplies the processing capability of a quantum computer many times over.
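
Entanglement can also be sketched in a few lines of NumPy. The snippet below (again a classical simulation, not real hardware) builds the textbook Bell state, in which two qubits always give correlated results even though neither has a definite value on its own:

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2): put the first qubit of |00>
# into superposition with a Hadamard gate, then entangle the pair with a CNOT.

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)            # Hadamard gate
I2 = np.eye(2)                                  # identity on one qubit
CNOT = np.array([[1, 0, 0, 0],                  # flips the second qubit
                 [0, 1, 0, 0],                  # when the first qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])          # two qubits, both |0>
state = np.kron(H, I2) @ state                  # superposition on qubit 1
state = CNOT @ state                            # entangle the two qubits

for basis, amp in zip(["00", "01", "10", "11"], state):
    print(basis, round(abs(amp) ** 2, 2))
# Only "00" and "11" appear, each with probability 0.5: the measurement
# results are perfectly correlated, yet neither qubit alone is a definite 0 or 1.
```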

Beneficial Uses of Quantum Computing

The superior processing capability of quantum computers makes them ideal machines for many tasks where conventional computers lag behind.

Science and Life Sciences 

The study of complex atomic and molecular structures and reactions is no small task. A great deal of computing capacity is required to simulate such processes.

For instance, the complete, exact simulation of even fairly simple molecules quickly becomes impossible with conventional computing technology. Quantum computing could therefore play a significant role in uncovering many of nature's concealed facts, particularly those of life itself. Significant chemical, physical, and biological research efforts that have stalled for years could take off once quantum computers mature.

Artificial Intelligence and Machine Learning 

Artificial Intelligence Illustration AI
Image by Tumisu from Pixabay

Even though scientists have made significant inroads in machine learning and AI with existing computing resources, quantum computing could help deliver the progress we have long aspired to: machines that approach human-level cognition. Machine learning feeds on big data, and building any machine-learning system means processing enormous datasets.

With the fast processing promised by quantum computing, even everyday AI workloads could become more streamlined. Beyond that, the sheer computing power of quantum devices could reshape how artificial intelligence is developed.

Improvement of General Optimization Procedures 

In today's busy world, we feel the need for optimization more than ever, whether in personal or commercial dealings. Whether it's an individual trying to find the best commute between day-to-day destinations or a financial institution trying to tailor a plan to every customer, good optimization requires taking many variables into account.

With the addition of more variables, the number of permutations and combinations goes up, and the amount of data to be processed increases exponentially. Optimizing a single financial plan might require processing several petabytes of data. Bringing such extensive optimization into everyday activities may only be achievable with the processing power of quantum computers.
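
A toy example makes the blow-up easy to see: the number of possible orders in which to visit n stops grows factorially, which is exactly what overwhelms classical optimizers as problems get bigger (the figures below are just arithmetic, not a benchmark):

```python
from math import factorial

# Possible visiting orders for n stops on a commute or delivery route.
for n in (5, 10, 15, 20):
    print(f"{n:2d} stops -> {factorial(n):,} possible routes")
```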

Other Side of the Coin: The Dangers Involved with Quantum Computing 

One should not be surprised by this heading. We have seen throughout history how the advent of any new technology intended for the benefit of humankind is followed by its misuse, and quantum computing is no exception. To make matters worse, the unrestrained processing power a quantum computer can harness could make its exploitation especially damaging. It is important to mention that researchers working in the domain are well aware of the unwanted repercussions of quantum computing.

Quantum Computing Puts Data Encryption Practices in Grave Danger

The digitization of our everyday activities has shifted nearly every valuable piece of information into digital form. From nuclear codes to personal banking information, everything now exists as digitized data. As a result, data is now considered a precious commodity.

And as we know, every precious commodity is vulnerable to vandalism, breaches, and theft. To address this vulnerability, computer scientists have developed encryption schemes that lock down data so that only authorized parties can access it.

Encrypted data can only be unlocked with the corresponding decryption key, which is created and held by those authorized to use the data. An unauthorized party cannot get around the encryption except by a technique called brute-force cracking, that is, trying every possible key. In practice, brute force only works against simple passwords and weak encryption with very short keys.

Let’s try to understand this with the help of numbers. 

As per calculations done by researchers, it could take a supercomputer more than a billion, billion years to crack data protected by what is called a 128-bit encryption key. To put things into perspective, our universe is only about 13.8 billion years old. So, for all practical purposes, a standard 128-bit key cannot be cracked by brute force using conventional binary computers, which work with only two possible states per bit.
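
The arithmetic behind that "billion, billion years" figure is easy to reproduce; the guess rate below is an assumption chosen only to illustrate the scale, not a measured benchmark:

```python
# Back-of-the-envelope estimate of brute-forcing a 128-bit key.
# The guess rate is an assumed figure, used only to show the scale involved.

possible_keys = 2 ** 128              # every 128-bit key that could exist
guesses_per_second = 1e12             # assume a trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years_needed = possible_keys / guesses_per_second / seconds_per_year
print(f"Years to try every key: {years_needed:.2e}")   # ~1.1e+19 years
print("Age of the universe:     ~1.4e+10 years")
```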

But when we replace this two-state bit of computing with a quantum bit of unlimited existing states, the tables surely get turned.

The 128-bit key that is so formidable against the brute force of classical binary supercomputers could fall flat once quantum computing is used to carry out the brute-force search. No quantum machine capable of this exists today, but some experts have estimated that a future quantum supercomputer could crack a 128-bit encryption key within about 100 seconds. Compare that to the billion-billion years it would take a binary computer to crack the code!

Aftermath 

The aftermath of such a scenario would be nothing short of a technological dystopia. With data encryption rendered ineffective, everything would be exposed to the shenanigans of criminal elements. To grasp even a fraction of the devastation, imagine every person on earth who is linked to the banking system losing access to their account. The mere thought of such a situation sends chills down the spine.

Apart from that, the neutralization of data encryption could lead to cyber warfare between nation-states, and here too, rogue actors would find it easy to capitalize on the situation. A global conflict in a world with multiple nuclear powers could end in dreadful ways. All things considered, the arrival of quantum computing could bring many irreversible repercussions.

Preparation to Protect Against the Nefarious Use of Quantum Computing 

Google and IBM have already demonstrated working quantum processors in controlled environments, so it would be unwise to dismiss quantum computers as a distant prospect. Businesses should therefore start preparing against the abuse of quantum computing now, rather than waiting for formal rules and protocols to be issued. Experts in digital security and cryptography already recommend measures, such as migrating to quantum-resistant encryption algorithms, to protect business data from exploitation in the quantum era.

Conclusion 

The way technology has progressed over the last few decades clearly indicates that quantum computing is the reality of the future. The arrival of quantum computers is not a question of 'if' but of 'when'.

Quantum computing, for all its benefits to the life sciences, the financial sector, and AI, poses a great threat to existing encryption systems, which are central to the protection of every type of confidential data. The sensible approach for any nation or business is to accept this unwanted aspect of quantum mechanics as a technological hazard and to start preparing against it with the help of experts.

With that said, quantum computing is also a blessing when used proactively for the benefit of humankind, and we look forward to the better quality of life it may bring for each of us once it becomes a reality.
