Blog

Zero Day

In recent years, the world of espionage has changed so much even James Bond has had to adapt. Anthony takes us deep inside a world most of us know almost nothing about — cyber espionage — to give us a detailed and dramatic account of the darker side of the internet.

*click*

“2009-06-07, 14:25 – Mission Log: I’m in!”

You’ve passed the first hurdle, effortlessly, thanks to a forged master key stolen from a locksmith with lax security. After years of training and preparation, your work finally begins.

You walk around the facility, calmly, systematically, drawing a map and surveying each room and noting who works in it, meticulously itemising every piece of hardware and software installed on every computer, effortlessly bypassing the security measures designed to block your access. Then you find it, your Primary Target: several Windows PCs with Siemens Step 7 SCADA Control System installed. Carefully, silently, you take copies of the custom-designed programs that are installed on the Programmable Logic Controllers (PLCs) that drive the machinery of this facility, and bundle them up, ready to be exfiltrated to a team of fellow operatives back at home anxiously awaiting your data.

“2009-06-07, 14:56 – Mission Log: Preliminary survey & payload attached. More to come.”

While you’re at it, you take pictures of people, rooms, doors, signs, equipment — copies of documents, photos, diaries and address books on every computer and cell-phone you can touch — eavesdrop and record conversations and phone calls, and peer over the shoulders of staff as they enter login credentials to various systems. They might come in handy later. You do all of this with almost complete impunity, because no one can see you — you are virtually invisible, thanks to a special gadget cooked up by the boffins in the lab back home, bestowed upon you in the best 007 tradition.

You collect all of your reconnaissance and stolen data, place it into a tidy little box wrapped in birthday gift wrapping paper, and drop it into the mail trolley, addressed from the facility’s Chief Of Operations, to his great aunt who is apparently holidaying in Malaysia. No one notices anything wrong, and your package is sent.

“2009-06-10 – Mission Log: Happy Birthday, I hope you enjoy your present!”

This is the job you were born to do. The best of the best, recruited straight out of university after six years of multi-degree PhD study and with a very promising career ahead of you, you heed your government’s call to serve your country — you are, after all, a patriot. What’s more, this job wasn’t advertised as a want-ad in the paper — they came to you! Who could resist such an ego trip? Aside from your extensive joint NSA and CIA covert field operative training, you also have extensive formal, informal, classified, and proprietary education in nuclear science, computer science, mathematics and cryptography. You speak the language and know the customs of your target country. Most of the time you work in one of the hundreds of nondescript, unidentified government buildings that sprang up following the 9/11 terrorist attacks. But now you are in Iran, one of the most educated and capable spies ever produced, on the mission of a lifetime.

For a while, you slink back into the shadows, calling upon the stealth and patience drilled into you from years of training, merely checking the mail trolley daily to intercept the reply from your controllers.

“2009-07-15 – Mission Log: New Orders”

You wake with a jolt. You have new orders, new training and capabilities, and a new weapon. You begin immediately. On every computer you identified last month as having the Siemens Step 7 software installed, you overwrite the site’s custom-written programs with modified versions, meticulously crafted so that no one will notice their new intent. Now you wait for the engineers at the Natanz Nuclear Enrichment Facility to perform routine updates to the PLCs that control the thousands of motors driving the uranium enrichment centrifuges, each spinning its deadly uranium cargo into ever purer form — for purposes benign or deadly, no one knows for sure — at which point your new program, with its malicious intent, goes to work. A little cocky, you sit back and relax, watching mayhem erupt around you.

“2010-01-23 – Mission Log: All going to plan. 945 centrifuges damaged”

Your modified PLC program is a masterpiece of destruction wrapped in subterfuge, driving the centrifuges through shuddering swings of high and low speed at random intervals weeks apart, ultimately shattering many of them irreparably. To the Natanz engineers, however, everything looks normal: every centrifuge appears to be spinning at its assigned 1064Hz, because the PLCs are faking their monitoring feedback, until the centrifuges begin to disintegrate. Worse still, when they inspect the program inside each PLC for signs of error or even sabotage, they find nothing awry, as disaster continues to unfold around them. The morale and confidence of the Natanz engineers plummet to unrecognisable lows, and bewilderment at how it could all be going so wrong feeds the self-doubt that they’re just not smart enough to compete with the West. A sly, satisfied grin crosses your face.

Meanwhile, in the halls of the Iranian government and its nuclear industry, all political hell is breaking loose — denials, misleading and deliberately misdirected confirmations, talking heads, and then rolling heads, including that of the head of Iran’s Atomic Energy Organisation, Gholam Reza Aghazadeh.

Unbeknownst to you, a mistake was made by your controllers, and some other operatives — your colleagues — are caught in the act!

Suddenly a white hot spotlight is on you and your colleagues. A thorough investigation of unprecedented multi-national collaboration is conducted over many months, seeking to unmask you, your whereabouts, activities, and origins. Rumour and speculation abound throughout the political, security and technical industries involved. Nervously, your controllers order you to continue your mission, until on 29 November 2010 Iran’s President Mahmoud Ahmadinejad admits that the nuclear program has indeed been infiltrated by ‘Western terrorists’ who damaged state property, although, he assures his citizens, the full extent of the damage is relatively limited.

On that same day, two nuclear scientists, Majid Shahriari, a quantum physicist, and Fereydoon Abbasi, a high-ranking official in the Iranian Ministry of Defence, are targeted in separate, near-simultaneous attacks in Tehran by assassins on motorbikes who attach bombs to their cars — the former killed, the latter seriously injured. You hang your head in shame. Clearly your team had failed to execute the mission as successfully as your controllers had hoped, and your government — or more likely its partner in this mission, Israel — resorted to the blunt instrument of car bombs to finish the job you’d trained for years to perform.

“2010-11-30 – Mission Log: New Orders – WE’RE PULLING YOU OUT NOW!”

The next day, your mission is over. You systematically erase all evidence of your existence at the facility, and then, as if by magic, your controllers extract you from Natanz in the blink of an eye. Unfortunately some operatives weren’t so lucky, now captured and mercilessly interrogated for clues as to what you’ve done and how you all remained invisible for so long. The game is up, the secret out. Your existence as the world’s most advanced covert intelligence field operatives plotting to destroy foreign nuclear infrastructure is public knowledge. You return home, tail between your legs, but still ready to begin preparation for your next mission, using the trove of intelligence you’ve gathered in the past 18 months.

It’s a story with the hallmarks of a Tom Clancy cold-war-era spy novel, mixed with elements of The Matrix. In 2011 and 2012, as details progressively emerged, the world was stunned to discover that not only was this real, it’d been going on for years, undetected, even under the nose of a multi-billion-dollar, commercially motivated security industry whose stated mission is to detect exactly this kind of infiltration.

In this dramatisation, you’re an amalgam of Duqu, Stuxnet and Flame, three of the most advanced (to our knowledge) computer viruses ever conceived, orders of magnitude more complex and devious than the garden variety malware that troubles the ordinary lives of humble citizens across the globe. The sole purpose of these viruses is to spy on and sabotage the real physical infrastructure of one specific foreign government, and no one else’s, without landing a single soldier on foreign soil or firing a single missile. Such a story was hitherto the work of spy-fi thriller novels. Now cyber warfare – not just espionage, but sabotage – is real, and works.

What we know so far is not complete. Those in the know – current and former staff of government and intelligence agencies in the USA and Israel – have spoken only on condition of anonymity, and only to the point where many critical details are withheld, as their disclosure would surely compromise current and future covert operations. Almost all of what we’ve learned about the technology behind this new reality has come from many months of painstaking work on each of these viruses by the big anti-virus companies like Symantec, Kaspersky and F-Secure, several universities, and relatively unknown infosec specialists, all working to reverse-engineer the captured viruses and unlock their secrets, their intent, and possibly their origin.

Faced with Iran commencing a full-speed development of its nuclear program in the early 2000s, and having lost any popular credibility with which to make new claims that another foreign government was developing nuclear weapons that the USA should go and do something about, President George W. Bush authorised the development of a cyber warfare program, code named Olympic Games. At first expectations were low, but by the time Barack Obama was poised to step into the Oval Office, Bush strongly urged Obama to persevere with the program. Obama did more than persevere, he expanded its scope and maintained a close overview of its progress and direction.

To understand how any of this remarkable ability to infiltrate computer systems considered to be ‘secure’ — and do so with such stealth — is possible, we must first understand a little about the internet, its origins, and the layers of complexity that have been cobbled together over more than twenty years to try and make it ‘secure’. We must also understand and accept how far along we are in the journey of computer science — that is, not very far at all. Even the word ‘secure’ is often misrepresented, as if it were a binary quality like the digital electronics that underpins all our technology. Unfortunately just as in meatspace (aka the ‘real world’ of flesh and bone, opposite of cyberspace), security is anything but black or white, off or on, absent or present, but rather has become just another business exercise in risk management, balancing the estimated risk of something going wrong versus the cost to avoid it.
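
To make that trade-off concrete, here’s the kind of back-of-the-envelope arithmetic it implies (a minimal Python sketch of standard ‘annualised loss expectancy’ reasoning; every figure is invented for illustration):

```python
def annual_loss_expectancy(incident_probability: float, loss_per_incident: float) -> float:
    """Expected yearly loss: chance of a breach times what a breach costs."""
    return incident_probability * loss_per_incident

ale_unmitigated = annual_loss_expectancy(0.30, 2_000_000)  # 30% yearly chance of a $2M breach
ale_mitigated = annual_loss_expectancy(0.05, 2_000_000)    # the control cuts the chance to 5%
control_cost = 250_000                                     # yearly cost of the control

# The business case: spend on the control only if it saves more than it costs.
print(ale_unmitigated - ale_mitigated > control_cost)      # True: $500k saved vs $250k spent
```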

‘The Internet’ was a lab experiment born of an idealistic vision of free and fast communication between computers — and the people using them — both near and far. For many years its development and use was the sole preserve of academics (and the US military), and there was an inherent and reasonable assumption of trust between peers. There were very few ‘bad actors’ on the embryonic Internet, and those that were had motives that were almost always simply about ego. Thus the internet’s underlying fabric (TCP/IP, and a rapidly developing suite of other protocols and services) offered little to no inherent security provisions. Only subsequently have additional layers of security been added, particularly once the internet became open to the general public in the early 90s. Even relatively robust forms of encryption were classified as restricted munitions by United States export policy until the late 90s, helping halt the progress of any widespread education, development, or use of cryptography until the following decade.

Concurrently, computer scientists have been endlessly busy developing myriad programming languages and associated tools for constructing ever more complicated programs and systems with less time and effort. This includes, for example, the Operating Systems (OSs), like Microsoft’s Windows, Apple’s Mac OS X and iOS, Unix and Linux in their dozens of flavours, and Google’s Android (and countless others in the dustbin of IT history) – upon which our programs run, providing an ever growing array of common facilities available to all programs. On top of these, web technologists are constantly developing additional layers, protocols, languages and frameworks we collectively refer to as ‘the online world’.

Perfection is rarely attained in this endeavour, indeed it’s an evolutionary process as inevitable and endless as biology, simultaneously affected by ‘artificial selection’ – that being the will and efforts of its developers, and ‘natural selection’ – the influences, restrictions and opportunities of the broader ecosystem of our built world, of economies, politics, laws and social structures.

Think back (if you’re old enough) to the earlier models of coin slot payment systems, such as those in the music jukeboxes and pinball machines of the 60s, and the vending machines and computer arcade game consoles of the 70s. Clever ‘hackers’ (teenagers with a lot of time on their hands, for the most part) worked out that you could fool them easily using cheap metal washers from dad’s garage, and for a short time a lot of cheap drinks and gaming was had, until the proprietors of those systems discovered that what was going on had become far too widespread. They took steps to make the coin slot mechanisms more complicated, for example making them much more sensitive to the weight and size of the coin, and rejecting any that weren’t right. Then there was the old coin-on-the-end-of-a-piece-of-string trick! Those same hackers worked out that if you dropped in a coin to which you’d attached a thread of cotton wrapped through a small hole drilled in the coin — which barely affected its weight — and then yanked it back out after the mechanism had detected the valid coin, voila! You got your coin back and free stuff! Imagine the looks on vendors’ faces when they discovered they weren’t even getting cheap metal washers in their coin boxes!

This escalating game of cat-and-mouse continued for many years, until eventually coin slot mechanisms became fiendishly clever and complicated and it just wasn’t worth the effort of trying to circumvent them, even where it remained technically possible; such were the revenues and profits at stake that vendors were willing to invest heavily in each new generation of coin slot mechanism. The same evolutionary process played out on the paper note slot payment systems of the 90s.

This is a real-world example of the classic ‘Zero Day’ security exploit referred to every day in the infosec world — someone finds a vulnerability in a system and deliberately exploits it to get free stuff, until their illicit activities are noticed and steps are taken to prevent them. The ‘zero day’ refers to the number of days the developers have known about a vulnerability at the moment it is first exploited (typically by hackers): zero. Zero Day exploits stand in contrast to scenarios where non-malicious security researchers discover a vulnerability and privately disclose it to the developers, allowing them time to fix it before it becomes public and maliciously exploited. This coin slot scenario is more than a mere metaphor for what’s been happening in the infosec landscape — it’s exactly how it happens, with only two differences.

The first is how far along the technological evolutionary tree we’re looking. One hundred years ago a coin slot payment mechanism would have been seen as a remarkable and devilish device, but now they’re rather passé. The programs, languages, OSs, and network protocols we’ve been developing for the last few decades have experienced exactly the same leap-frog evolution in security, as exploiters find weaknesses, and the system’s developers eventually learn that a vulnerability is being exploited and take steps to fix it — rinse and repeat. It’s why we’re constantly downloading updates for our OSs and applications, even when no new functionality is offered.

The second difference is that, unlike the perfectibility of the coin slot mechanism, there appears to be no such destiny in sight for our computer technology. This game has been spinning further into anarchy since its dawn in the 1970s, and if anything it’s increasing in pace. If you catch an infosec specialist in a weaker moment, they’ll often confide that we’re losing the battle to keep our IT systems secure.

There’s an endless number of ways IT systems can be exploited, and — shock! — simply guessing people’s crappy passwords is still a very effective tactic. But to illustrate one of the more technical ways, consider a typical login screen for an online service, accessible to anyone anywhere with nothing more than a web browser and an internet connection, presenting a familiar page prompting for your username (or email address) and password. When you, as a programmer, are writing a program module to implement this, it’s ‘human nature’ to expect people to type a reasonable number of characters, from a limited character set, like ‘yourname@yourdomain.com’ and ‘MyPassword1!’, right? But what if the user doesn’t do what you expect? What if they actually type in thousands of characters, either deliberately, or because the cat sat on the keyboard while they weren’t looking, before they turned back and hit enter? Behind the scenes, in many programming languages, things can go terribly awry if a piece of program code is fed data of an unexpected quantity or form — so awry that the program, or even the entire computer, can crash! When this happens, it’s a huge red flag that the program isn’t sufficiently checking that input from the user falls within acceptable parameters and gracefully rejecting anything that doesn’t. The technicalities of how to turn such a vulnerability into a working exploit quickly enter propeller-head territory, but suffice to say that beneath the level of the program’s awareness, it’s possible to provide input that is, rather than the product of a cat’s bum, the deliberate handiwork of a malicious user: input that actually runs a program of the attacker’s choosing, a program that could do anything.
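
To make ‘gracefully rejecting’ concrete, here’s a minimal sketch in Python of the kind of input validation the hypothetical login module above should perform before the data goes anywhere near lower-level code (the size limits and patterns are illustrative assumptions, not a standard):

```python
import re
import string

MAX_USERNAME = 254    # illustrative limits only
MAX_PASSWORD = 128
ALLOWED_CHARS = set(string.printable) - set("\r\n\x0b\x0c")

def validate_login_input(username: str, password: str) -> None:
    """Reject anything outside the expected size and shape, *before*
    it reaches parsing, database queries or lower-level code."""
    if len(username) > MAX_USERNAME or len(password) > MAX_PASSWORD:
        raise ValueError("input too long")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", username):
        raise ValueError("username is not an email address")
    if any(ch not in ALLOWED_CHARS for ch in password):
        raise ValueError("unexpected characters in password")

validate_login_input("yourname@yourdomain.com", "MyPassword1!")  # passes silently

try:
    validate_login_input("x" * 100_000, "cat-on-keyboard" * 10_000)
except ValueError as err:
    print("rejected:", err)  # rejected: input too long
```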

‘Zero Day Exploits’ like these are the holy grail among hackers and can be traded for several tens of thousands of dollars each, from discoverer to malicious attacker, in the ‘underground’ forums and IRC chat channels where hackers hang out. They can provide almost unfettered access inside a computer system that would otherwise be considered secure. By comparison, stolen credit card data is worth only a few dollars per card! This actually makes sense, when you consider that just one zero day exploit can get a hacker into a system, from which can be stolen the details of tens of thousands or even millions of credit cards, which can then be sold to someone else who commits credit card fraud. In this corner of the criminal hacker ecosystem, it’s all about the numbers and money — how much effort is needed for how big a return.

That’s why the Apple Mac has, until recent years, been considered ‘immune’ to most forms of malware (viruses, worms, etc) – not because of any mystical special sauce cooked up by Apple engineers, but simply because Mac users were such a small proportion compared to Windows users that it just wasn’t worth hackers’ while to find exploitable security holes in the Mac OS and then write malicious virus programs for the Mac platform. That is changing.

The programmers behind Duqu, Stuxnet and Flame weren’t motivated by money or fame, and they didn’t use just one ‘zero day’ exploit to infiltrate their targets. Stuxnet alone used five — four to infiltrate specific aspects or layers of Windows PC security and one to get inside Siemens Step 7 PLC control systems — the first known case of a ‘zero day’ exploit in a SCADA industrial control system. In 2010 – the year Stuxnet was discovered – Symantec reported a total of only fourteen new zero day exploits discovered among all known malware. This immediately raised eyebrows and led to speculation that Stuxnet came from a large, or at least extremely competent and well-funded, interest.

One of those Zero Day exploits related to how Windows displayed icons in Windows Explorer (which includes what we refer to as our ‘Desktop’). Due to this unforeseen flaw, all you had to do was view a list of the contents of a folder containing a file that Windows thought was an icon but was in fact a malicious program — Stuxnet — and in the blink of an eye the computer would be infected. At Natanz, Iran’s nuclear enrichment facility, the PCs and PLCs driving the plant equipment were kept on a physically separate LAN with no internet access, isolated from the general administration PCs and the internet. This is a very sensible security precaution referred to as an ‘air gap’, an expression from electronics where an air gap can serve as the insulation that separates two copper conductors, ensuring no power or signals pass between them. The problem is that no computer or network exists in a bubble. New PLC programs or updates to existing ones have to be transported into the isolated network from time to time, somehow.

In order to jump over Natanz’s air gap into its isolated LAN, Stuxnet’s masters first targeted the third-party service providers who program and service the Siemens PLC equipment. Those third-party engineers, knowing of Natanz’s air gap, would arrive onsite and copy their updated programs and associated files from their own Stuxnet-infected laptops onto a USB thumb-drive, where Stuxnet came along with their files as a hidden icon file. In other words, immediately upon inserting their USB thumb-drive into any of Natanz’s isolated PCs and viewing its contents, Stuxnet would silently run and proceed with its malicious agenda. It is not yet known whether this means of infection involved the cooperation of the third-party engineers — that is, human saboteurs — or whether they were merely unwitting accomplices.

Zero Day exploits weren’t the only tools used by these new cyber warriors. Cryptographic certificates are used extensively across the IT landscape. They can confirm the identity of a remote party (and optionally, your identity to them), encrypt the contents of data sent and received between you and a remote party, thus thwarting anyone or anything in between that might spy on the transmission, and wrap the data in a digital envelope, thereby ensuring the data hasn’t been tampered with en route. Cryptographic certificates can be used for secure access to websites (signified by the reassuring https:// in the web-browser’s address bar, the ’s’ standing for ‘secure’, courtesy of SSL encryption), for email (which hardly anyone does), and for assuring the integrity of programs downloaded from trusted sources. Most operating systems check such certificates before running a program or installing a device-driver.

Such crypto certificates are issued to an applicant by globally recognised Certificate Authorities that are trusted by ‘end-points’ (our web-browsers, email programs, computers, etc), but only after the applicant has satisfied the issuing authority that it is a genuine, legitimate entity whose identity can be established in meatspace via a challenge-response combination of phone calls, faxes, snail-mail, text messages, and so on.

Stuxnet and Flame used crypto certificates stolen from two manufacturers of computer chips, both of whose headquarters, coincidentally or not, are in the same industrial park in Taiwan. It isn’t known whether they were stolen via the traditional physical means of break-and-enter, or an ‘inside job’, or via a cyber hack (my money is on the latter). Ordinarily these organisations use their unique and secret crypto certificates to ‘sign’ the device-driver programs they write, which are then installed on the countless millions of Windows PCs that have their chips incorporated somewhere inside. With these stolen certificates, Stuxnet and Flame could sign themselves and then masquerade as programs and device-drivers from these well known and trusted organisations, thus tripping no virtual alarms and arousing no suspicion at all.
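
To see why a stolen signing key is so devastating, here’s a hedged sketch of the sign-then-verify principle using Python’s third-party `cryptography` library (this illustrates the general mechanism, not Microsoft’s or the vendors’ actual driver-signing systems):

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for a hardware vendor's code-signing key pair.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

driver = b"...device-driver bytes..."
signature = vendor_key.sign(driver, padding.PKCS1v15(), hashes.SHA256())

# Roughly what an OS does before loading a signed driver:
try:
    vendor_key.public_key().verify(signature, driver, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: loaded as trusted vendor code")
except InvalidSignature:
    print("rejected: signature does not match")

# Note what the check proves: only that *the holder of this private key*
# signed these bytes. Steal the key, and malware you sign with it is
# cryptographically indistinguishable from the vendor's own driver.
```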

It gets worse. You’re probably familiar with ‘Windows Update’, the means by which Microsoft automatically lets you download and install updates to Windows and other Microsoft applications and servers, like Office’s Word, Excel and PowerPoint, and Exchange, SQL Server and SharePoint. Microsoft implements what was previously considered a fairly robust system of cryptographic signing to ensure that the updates your computer downloads actually come from, and were actually made by, Microsoft; otherwise it would be trivially easy for someone to pretend to be Microsoft and install anything they wanted on your computer. Flame not only exploited a flaw in this Windows Update cryptographic system (since fixed by Microsoft), but did it using a cryptographic attack technique that had never been seen before. The cryptographers behind Flame broke new ground in the world of cryptography. The number of people in the world who both understand and can practically implement cryptography at this level could probably all share a drink in a small bar, and most would know each other by name, but we often forget that in the shadows outside this small circle are an unknown number of staff employed by government security agencies all over the world. These shadow cryptographers enjoy the benefits of reading and learning from the open discussions and developments made by the public crypto community, but no one outside government ‘spook’ organisations ever gets their hands on the fruits of their secret labours.
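
Public analysis later identified the technique: a ‘chosen-prefix collision’ against the ageing MD5 hash function, which let Flame’s creators forge a certificate acceptable to Windows Update. As a toy illustration of why hash collisions break signing schemes, consider a naive verifier that trusts any file matching a vendor-signed digest (a sketch only; the real Windows checks are far more involved):

```python
import hashlib

def naive_is_trusted(file_bytes: bytes, signed_md5_hex: str) -> bool:
    # Trusts ANY bytes whose MD5 matches the digest the vendor signed;
    # the hash's collision resistance is doing all the security work.
    return hashlib.md5(file_bytes).hexdigest() == signed_md5_hex

legit_update = b"legitimate update payload"
signed_digest = hashlib.md5(legit_update).hexdigest()  # the digest the vendor signs

print(naive_is_trusted(legit_update, signed_digest))   # True, as intended

# A chosen-prefix collision lets an attacker construct a *different* payload
# with the same MD5 digest. That rogue payload would also return True here,
# inheriting the vendor's signature without ever touching the private key.
```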

These cyber weapons also create peer-to-peer networks between themselves within an infected LAN. They use this network for self-coordination, to reach PCs that may not have an internet connection configured even though they are on the internet-connected LAN, and to distribute updates to their own programming, thus avoiding the unwanted attention that dozens or hundreds of instances all downloading updates from the same source simultaneously would attract. Current speculation is that there were several variants of Stuxnet, Duqu and Flame over their multi-year lifetimes, perhaps to conduct the various stages of the mission.
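
The peer-relay idea is simple to sketch. Here’s an illustrative Python toy (all names invented) of the behaviour described above: a machine with no direct internet route asks a LAN peer to fetch its commands on its behalf (cycle handling omitted for brevity):

```python
class Peer:
    """Toy model of one infected PC participating in the P2P network."""
    def __init__(self, name: str, has_internet: bool):
        self.name = name
        self.has_internet = has_internet
        self.neighbours: list["Peer"] = []

    def fetch_update(self) -> str:
        if self.has_internet:
            return "command-payload-v2"     # talk to the C&C server directly
        for peer in self.neighbours:        # otherwise relay via a LAN peer
            try:
                return peer.fetch_update()
            except RuntimeError:
                continue
        raise RuntimeError(f"{self.name}: no route to C&C")

gateway = Peer("office-pc", has_internet=True)
isolated = Peer("engineering-workstation", has_internet=False)
isolated.neighbours.append(gateway)
print(isolated.fetch_update())  # fetched via the gateway peer, not directly
```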

For example, in order to infect the Natanz PLCs (which control and monitor all the plant equipment) with modified PLC code, the masters of Stuxnet first had to understand in intricate detail how all the Natanz equipment was laid out, identified, and connected to what type of machinery. PLCs are small, toaster-sized boxes of electronics that can be programmed with relatively simple instructions to perform a specific, limited function and interface with physical infrastructure. For example, “PLC # 378 is connected to a pump and a valve controlling cooling fluids; PLC # 379 is connected to a motor driving a centrifuge at 60 000 rpm”. Only part of this information is contained in the PLC programs themselves, and they say nothing about the broader facility. Information critical to the Natanz attack would also live in documentation made by humans for humans — floor plans, equipment definitions and layouts, device addresses and control codes, and so on. That is, in AutoCAD drawings, Word and Excel documents, PDF files, emails, and so on. Only with this documentation could they design a future update to Stuxnet to identify exactly its target and attack no others, and then meticulously modify the PLC programs operating at Natanz to do their malicious damage unnoticed. Once they had perfected it in their lab, their updates were sent back via the Stuxnet peer network to be distributed amongst its instances, ultimately overwriting the legitimate PLC programs with the modified ones. It is not known whether these cyber weapons were able to exfiltrate all the needed documentation alone, or whether they needed inside help from human spies.
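
In code terms, the result of all that reconnaissance is effectively a fingerprint check gating the payload. Here’s a hedged Python sketch (every identifier invented; public reporting on Stuxnet suggests the real logic keyed on specific frequency-converter models and configurations before activating):

```python
# Built from the exfiltrated plant documentation; hundreds of entries in reality.
EXPECTED_LAYOUT = {
    378: "pump and valve, cooling fluid loop",
    379: "centrifuge drive motor, 60 000 rpm",
}

def layout_matches(discovered: dict[int, str]) -> bool:
    """Activate only on an exact match with the documented target facility."""
    return discovered == EXPECTED_LAYOUT

def maybe_attack(discovered: dict[int, str]) -> str:
    if layout_matches(discovered):
        return "overwrite PLC programs with sabotage variant"
    return "stay dormant: wrong facility"

print(maybe_attack({378: "pump and valve, cooling fluid loop"}))  # stay dormant
```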

Like a series of nested Russian dolls, Duqu, Stuxnet and Flame are constructed in a modular and layered manner, within successive envelopes of encryption, decrypting and unpacking only the module(s) needed for a particular task, on demand. When Stuxnet reached a PC with Siemens Step 7 installed, for example, it decrypted a module that would infect the Step 7 programs destined to be transferred into the PLCs and carry out their ultimately destructive intent. When Flame was ready to eavesdrop on Skype phone calls, a device-driver that wedged itself between Skype and the computer’s audio hardware (microphone and speakers) would be activated to record the conversation. When it saw a new cellphone with Bluetooth enabled come into wireless range, it would attempt to read the contents of its address book.
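
The decrypt-on-demand pattern itself is straightforward to sketch. Here’s an illustrative Python version using the `cryptography` library’s Fernet primitive (module names and contents invented; the real viruses used their own layered, custom encryption):

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()
box = Fernet(key)

# Payload modules travel encrypted; nothing is readable at rest.
modules = {
    "step7_infector": box.encrypt(b"<code: infect Step 7 project files>"),
    "audio_tap":      box.encrypt(b"<code: record microphone and Skype>"),
    "bt_scanner":     box.encrypt(b"<code: enumerate nearby Bluetooth phones>"),
}

def run_module(name: str) -> None:
    plaintext = box.decrypt(modules[name])  # unpacked in memory, only when needed
    print(f"running {name}: {plaintext!r}")

run_module("step7_infector")  # only this module is ever decrypted for this task
```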

What’s more, they used a blend of techniques to remain undetected for an unprecedented number of years — ordinary malware is generally detected within hours to days. A ‘rootkit’ digs itself deep into the OS with total ‘root’-level privileges and fools the operating system into hiding all files related to the malware. Where traditional rootkit-style malware saves several files to the hard drive, leaving open the possibility of detection, these weapons used previously unknown rootkit techniques that kept most of the decrypted modules only in the computer’s RAM, minimising the detectable footprint of files saved permanently to the computer’s hard drive. They also adapted themselves to the specific traits of the many different anti-virus products they would encounter, to help minimise the probability of detection. Flame, however, literally hid in plain sight, using a collection of standard code libraries, an SQLite database (for temporarily storing stolen data) and services that, together, could easily be dismissed as a legitimate business database program!

An extraordinary amount of effort went into keeping all this secret and undetectable. We’re told by the ‘unnamed sources’ that even the developers of these viruses were split into several isolated teams, such that very few actually knew what their final product or target was. Even if these cyber weapons were discovered, their developers clearly wanted it to be very difficult for anyone else to be able to reverse engineer them and learn all the secrets and tricks contained therein. To this day there remain a few parts of Stuxnet, by far the most investigated and reverse-engineered malware ever, whose purpose still isn’t understood because they can’t be decrypted. Investigations into Duqu and Flame continue.

In a recent admission, Mikko Hypponen, Chief Research Officer at F-Secure, said: ‘Flame was a failure for the antivirus industry. We really should have been able to do better. But we didn’t. We were out of our league, in our own game.’

The IT security industry’s parallel investigations also dissected the Command and Control (C&C) systems for these cyber weapons. In order to maintain control of thousands of computers infected with Duqu, Stuxnet or Flame, and to do so at several arms’ length to help hide the identity and location of their masters, these weapons operate networks of servers across the world, typically on servers rented from ordinary web hosting businesses, either paid for with stolen credit cards, or commandeered by exploiting vulnerabilities in the web-host’s own hosting platform or servers and installing themselves as if they belonged to a legitimate customer.

These C&C servers allow the thousands of infected computers to report back their status and, crucially, upload their reconnaissance data; store that data in an encrypted form that only their masters could ever decrypt; and send back new commands to infected computers — individually, in groups, or all at once. Their masters downloaded all the reconnaissance data from the C&C servers twice per hour and uploaded any new instructions, likely with automated scripts, and accessed their C&C servers via a string of third-party proxy servers around the globe, as well as using each C&C server to control other C&C servers, essentially creating a web of misdirection to maintain their anonymity. Once reconnaissance data had been successfully forwarded to its masters, the C&C servers would securely erase that data from their hard drives such that no recovery would be possible, and any server holding reconnaissance data for more than a set period of time would also wipe it, guarding against the possibility that the server had been ‘captured’ into an isolated network.
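
In pseudocode, the relay behaviour described above amounts to a strict retention policy. Here’s a Python sketch with invented names and figures (note that real ‘secure erasure’ means overwriting data on disk, which an in-memory sketch like this can only gesture at):

```python
import time

RETENTION_SECONDS = 7 * 24 * 3600  # invented cutoff; the real figure is unknown

class RelayStore:
    """Toy C&C relay: hold encrypted blobs only until forwarded or stale."""
    def __init__(self) -> None:
        self.blobs: dict[str, tuple[float, bytes]] = {}  # id -> (stored_at, ciphertext)

    def store(self, blob_id: str, ciphertext: bytes) -> None:
        self.blobs[blob_id] = (time.time(), ciphertext)

    def forward_and_wipe(self, blob_id: str) -> bytes:
        _, ciphertext = self.blobs.pop(blob_id)  # forwarded upstream, then dropped
        return ciphertext

    def purge_stale(self) -> None:
        """Wipe anything held too long, in case the server has been captured."""
        now = time.time()
        self.blobs = {bid: entry for bid, entry in self.blobs.items()
                      if now - entry[0] < RETENTION_SECONDS}
```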

After dissecting these Flame C&C servers, researchers realised they were designed to communicate with a range of infected clients using up to four different communication protocols. However, Flame used only one of them, and Duqu and Stuxnet used different C&C servers and protocols altogether. Investigators now strongly suspect that Flame and its protocol are probably the oldest of these three warriors, with initial development dating back to 2006 (though certainly updated frequently), and that at least one as-yet-unknown variant, speaking one of the other three protocols, is actually out there, somewhere. Clearly this is a work in progress.

When news of Flame’s discovery broke in public discussions amongst infosec specialists, within a few hours a command was sent to all instances of Flame to delete themselves. Given this ability, might their masters have also redirected the communications of other, as-yet-undetected cyber warrior variants already out in the field to different C&C servers, thus preserving the secret of their existence?

However, Stuxnet, Duqu and then Flame were detected, eventually. Whilst there is some conjecture as to what really happened, the ‘secret sources’ suggest a mistake was made, resulting in the detection of Stuxnet and, by implication, the identification of its masters. Stuxnet was never intended to escape ‘into the wild’ where it was first found, but when someone from or associated with Natanz took their Stuxnet-infected laptop back to the outside world (i.e. home, with an internet connection), a later update to Stuxnet’s programming supposedly resulted in it failing to recognise that it was outside the confines of Natanz. It escaped and began to spread rapidly, infecting thousands of PCs, even though it was programmed to do nothing to ‘non-target’ PCs.

Quite by chance, Stuxnet infected one particular PC which, for reasons probably innocuous and unique to that PC, got stuck in an endless boot-up/crash loop, and was investigated by an almost unknown antivirus company, VirusBlokAda, in Belarus. They determined that this virus sample was using what appeared to be an unknown Zero Day exploit. The surface qualities of this ‘unknown sample’ were enough to garner at least some interest — simply using a new zero day exploit makes a sample noteworthy.

At nearly half a megabyte, Stuxnet was also huge: ten to fifty times larger than garden variety malware. Ordinarily, only malware that includes space-hogging images or fake phishing web pages is this large, but the ‘unknown sample’ appeared to have none of that; rather, it was a dense lump of code and commands. Despite that puzzle, many researchers were ready to move on. However, a few Symantec researchers persevered, spending at first days, then weeks, obsessively and painstakingly reverse engineering Stuxnet’s code, becoming progressively awed by the complexity, quality and cunning they found locked inside, including an utterly unprecedented four Windows Zero Day exploits, and they still hadn’t unlocked Stuxnet’s ultimate payload. Thus was ignited the most concerted and lengthy investigation of any malware up to that time, involving several antivirus companies, universities, and one small organisation in Germany that had been warning about the possible exploitation of industrial control systems for many years.

Once Stuxnet was detected and better understood, researchers observed similarities to other, previously unidentified malware that had more recently been infecting the computers of Iran’s Oil Ministry, in some cases wiping them completely clean and leaving them utterly unusable. These common code modules and other shared traits were the clear evidence linking Stuxnet and Flame as coming from the same creators, or at least from multiple teams collaborating.

Stuxnet was akin to the classic Hollywood trope of a secret government bioweapon virus escaping from the lab (or at least its intended target) and infecting untold thousands or millions of its innocent citizens, simultaneously proving its virulence and effectiveness, and providing an unsettling reminder that governments routinely do things most of us could (and would) never do ourselves.

While some argue that it is possible to produce truly secure computer systems and programs, they will also usually admit that the effort and cost to do so can be exorbitant, and the human talent needed is severely lacking. First consider that ‘security’ is just one relatively small aspect of designing and writing a program, an aspect that usually isn’t core to the purpose of the software, whether or not there are commercial imperatives driving its least-cost development. Security is a constantly moving target, where the techniques and vulnerabilities exploited today can become obsolete within years, or days, depending on how careful the perpetrators are in hiding their tracks. It’s a constant flux that our educational institutions can be too slow to keep up with, leaving generalist comp-sci tertiary graduates ill-prepared for the rough and tumble of the infosec landscape they first encounter. As we’ve also learned, ‘security’ comes from all the links in the chain, at all layers of the system, being resilient — where even if everyone at each of those levels were at the top of their game, it’s still not always enough. And let’s not forget a fundamental vulnerability — the ‘human factor’ — where ordinary computer users can be the weakest link in the IT security chain, where just one slip clicking a link or opening an email attachment is all the foothold malware needs.

What compounds the challenge is that some of the techniques by which technology can be exploited can be extremely difficult to understand even after the fact of a security breach, much less be foreseen. Experienced programmers can sit in front of a block of program code — their own or someone else’s — known to have a major security flaw, and simply not see it. Such are the esoteric subtleties at play in the program code, the OS it runs upon, the communication protocols used, and even the underlying hardware, all of which conspire to produce this insecurity. Furthermore, programmers increasingly do their jobs ‘standing upon the shoulders of giants’, reusing program code that was written by their predecessors, or that of a ‘black box’ designed and coded by a third-party over which they have no insight or control.

It is for these reasons that self-taught ‘white hat hackers’ (hackers with benevolent, or at least non-malicious, intent), proudly maintaining the IT industry’s ‘cowboy’ image, can still make the best programmers and specialist IT practitioners, particularly security specialists, despite gaps in their knowledge and a single-minded focus on narrow areas of technology. This is because they approach the task from the same perspective as the ‘black hat hackers’ (the ones with decidedly malicious intent), as manifest in the ‘penetration testing’ sub-industry. Rather than reviewing code and second-guessing what the documentation says a program should do — as distinct from what it might actually be doing — ’pen-testers’ cut to the chase and apply all of the known tools and tricks of black-hat hackers, and undoubtedly a few of their own (at the invitation of their clients, of course), in an attempt to find security holes. They then report back their findings to help plug obvious holes and develop more secure solutions. Unfortunately their services don’t come cheap, and are too often dismissed by management who underestimate risk, mainly because they don’t fully grasp the intrinsic vulnerability of their IT systems until they’ve suffered an attack themselves.

This is all to say our technology is as fortified or flawed as we choose to make it, notwithstanding the practical limits that keep genuinely secure systems perpetually just beyond our reach. Thus the dominant force in commercial software development is to ‘ship now, fix later’, and even then the fix often comes only after something goes wrong in public.

Be assured that this ‘cyber warfare’ is not a one-sided endeavour. Many countries across the ideological spectrum are now confirmed, or at least strongly suspected, to be operating cyber espionage, and possibly cyber sabotage, programs. Geeks love acronyms, and here’s a new one: APT – Advanced Persistent Threat. Duqu, Stuxnet and Flame are three such threats, but they’re just the few about which we finally know a significant amount. Some infosec specialists sarcastically refer to the term ‘APT’ as code for ’attacks probably by the Chinese government or its proxies’. Clearly this is a one-eyed view of the APT landscape. Such attacks have infiltrated not only Google (the only multinational company, at the time, to publicly acknowledge such attacks even though it had no compulsion to do so, and whose response was to stop censoring the results of its Google Search product for Chinese citizens) but also dozens of major US and European corporations, in an attack referred to as Operation Aurora, understood to have been Chinese in origin and whose purpose appears to have been the theft of intellectual property of all kinds, across many industries. Scott Borg, director and chief economist of the U.S. Cyber-Consequences Unit, posits that China ‘is relying increasingly on large-scale information theft. This means that cyber attacks are now a basic part of China’s national development strategy.’ In the fast-paced world of software products, for example, why wait until your competitor has released a product before you take a look and try to emulate it — a process that can take years and millions of dollars — when you can steal the source code, give it a different face and call it your own, to be sold to a massive national market that wouldn’t know it was far more than a mere ‘rip off’ of a Western company’s product? Or, worse still, gain an insider’s perspective on how to exploit the Westerners’ products, and thus infiltrate the networks of those who purchase and install them.

Security gate-keepers RSA have also been victims. Their two-factor authentication security product SecurID is used extensively not just by banks (e.g. the security tokens your bank issues that are needed to log in to online banking), but by governments, militaries and military contractors around the world. An APT that infiltrated RSA’s network in early 2011 literally stole the ‘keys to the kingdom’ of every customer of RSA’s SecurID product. Embarrassingly, all it took was a spear-phishing email landing in a staff member’s spam/junk folder, appearing to be from a fellow staffer, with the subject ’2011 Recruitment Plan’ and containing an Excel spreadsheet… which they opened. It’s one of the oldest ‘social engineering’ tricks in the book. From there it was game over for RSA, who were merely Stage 1 of this APT’s ultimate goal. Stage 2 was to use those keys as a critical part of infiltrating the networks of many large corporations, including military contractors like Lockheed Martin and Northrop Grumman, and exfiltrate unknown amounts of classified military data. Those same military contractors and other high value targets now admit they’ve been on the receiving end of highly orchestrated, well funded, and escalating APT ‘cyber weaponry’ for many years.
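
SecurID’s actual algorithm is proprietary, so as a stand-in, here’s an RFC 6238-style time-based one-time-password sketch in Python (standard library only). It shows why the per-token ‘seeds’ stolen from RSA really were the keys to the kingdom: anyone holding a seed can compute the same codes as the token on your keyring.

```python
import hashlib
import hmac
import struct
import time

def totp_like_code(seed: bytes, timestamp: float | None = None, step: int = 60) -> str:
    """Six-digit code derived from a shared seed and the current time window
    (RFC 6238-style; SecurID's real algorithm differs, but the principle holds)."""
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

seed = b"per-token secret provisioned at manufacture"
print(totp_like_code(seed))  # what the hardware token would display right now

# Whoever holds the stolen seed computes the identical code, so the second
# authentication 'factor' quietly collapses back to one.
```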

Earlier this year the Australian government, on the advice of its national security agencies, took the controversial step of barring Huawei, a Chinese manufacturer of a wide range of networking equipment, from bidding to supply infrastructure for the $40+ billion fibre-to-the-door National Broadband Network currently being rolled out. Huawei have long been thought to have close ties with China’s government. It is not publicly known whether the concern was simply that they would gain intimate knowledge of sensitive aspects of NBN infrastructure, or that their networking equipment is riddled with security vulnerabilities, allowing anyone to hijack their customers’ routers to act as spies or saboteurs and totally undermine the privacy and security of all who used the NBN (which means, contrary to some popular fear-mongering, Huawei wouldn’t need to bother with secret ‘back doors’ into their routers). The USA is taking a similarly hard line against Huawei.

Another form of attack that’s become the favourite of politically oppressive regimes is the ‘watering hole’ attack. Earning its name from the way lions lurk near watering holes in the dry savannah until prey, desperate for a drink, can’t hold off any longer and makes a dash for it, such governments now target the websites likely to be frequented by their political dissidents, rather than the dissidents themselves. Why waste time trying to find them, when you can just hack the web-servers of organisations like Amnesty International and other human rights NGOs, which in turn infect the PCs of the very dissidents they’re seeking, requiring little more than a few hackers for hire with a grab-bag of Zero Days at their disposal? The cyber-spies instantly and silently identify the dissidents and their locations, and capture evidence of all the juicy, subversive, democratic and human rights activism in which their rogue citizens are engaged.

Some find it easy to dismiss ‘cyber warfare’ as simply a new manifestation of the familiar Cold War era, as the games of nations and governments, as though it had no real impact on Joe Average at home, who is, after all, not the direct target of these powerful cyber weapons. Perhaps some of these people are the same ones who thought the internet was a fad. Have we forgotten the decades of nuclear arms race terror, the Korean War, the Suez Crisis, the Cuban Missile Crisis, the Vietnam War, Russia in Afghanistan, the many proxy wars, the Space Race and the folly of ‘Star Wars Defence’ — all products of the Cold War’s so-called stalemate, and the deadly impact they each had on ordinary citizens across the world, whether as attackers, defenders, allies, or uninvolved civilians? These deconstructed APTs like Duqu, Stuxnet, Flame and others are now in the public domain, and bits of them, or at least the techniques they pioneer, are already showing up in more commonplace instances of malware. And it’s far easier and cheaper for the cyber weapons so intricately and expensively constructed by the big players to fall into the hands of (non-government) criminals or terrorists of any stripe than it is for weapons in the illegal arms trade conducted in meatspace.

We have entered an era when, anytime something significant goes wrong with technology or infrastructure, the possibility of cyber attack is high on the list of causes investigated. When you can take remote control of an electricity grid, gas pipeline, or air traffic control system, you no longer need to hijack a plane and fly it into the target to inflict great damage and loss of life. Mutually Assured Destruction (MAD) through a broad nuclear exchange may not be the primary risk any more, but who has more vulnerability, and more to lose, than the Western countries whose industry and infrastructure are the most reliant upon technology? As these cyber weapons gain the capability to inflict real world destruction, at what point does the recipient consider that war has been declared upon them, given the potential anonymity of the perpetrators, or indeed the ability to carry out ‘false flag’ operations that plausibly implicate an innocent party? Could this escalating cyber-warfare be taken past a threshold at which it appears to the citizens of the recipient — the vast majority of whom are ignorant of the ubiquity, power and subtleties of this new cyber war paradigm — as an unprovoked attack, upon which they popularly authorise their government to respond with traditional warfare? Considering the broader geopolitical landscape of conflict over resources, might this potential be deliberately pursued as a new form of propaganda?

We’re told Barack Obama was at pains to prevent ‘Olympic Games’ from being detected, as detection would provide moral and political justification to its adversaries. The reality is that many countries have been at it for years already; it simply wasn’t public knowledge until the last year or two. Welcome to a new chapter of our brave new world. Politically, the nation-state is just as important as ever, but geographically, a nation’s borders have taken the biggest step toward irrelevance since the advent of the intercontinental ballistic missile. The internet now provides our ‘spooks’ with a wonderful new battleground of capabilities and potential for insidiously cunning subterfuge, and ‘security’ is little more than a myth sold to gullible consumers. ◾