When I first started in cybersecurity, computer viruses typically spread via floppy discs or executable files. The focus was on detecting these attacks, and remediation was usually a simple case of deleting the infected executable files. Much harder was removing those that modified key parts of the operating system, typically the partition table or boot sector: code at the start of the hard disc that, as its name suggests, is loaded when the computer is turned on, defines how the disc is logically laid out, and starts the loading of the operating system.
Clean-up of these attacks was initially done manually and required low-level tools to copy back the genuine sectors, which were typically hidden elsewhere on the disc because, even after the virus had loaded, they were still needed for the computer to start up normally.
Lesson learnt: attacks were slow and clean-ups were manual.
In 1995 there is a day I'll never forget: the day viruses moved from executable files (which makes sense, as a virus is a software program that needs to be run) into files we consider to contain data or information. The first was the Word macro virus Concept, given its name because the virus seemed incomplete; it had no payload. It simply abused what was becoming an increasingly powerful capability, macros: small, high-level pieces of code embedded in data files to perform simple automated tasks, such as inserting the current date when a new file is created. Every new file created and every existing file opened uses template files, and those templates became the hub of its spreading. At the time we were a small handful of people doing tech support at Dr Solomon's, and the moment the Word macro virus Concept appeared our phone lines were swamped; we pulled in everyone we could to handle the load.
Lessons learnt: (1) The key shift was that we all share data files on a daily basis, so viruses moved from something that spread over months to outbreaks that could spread in days or hours, if not faster. (2) As attacks started to use data files to spread, they also started to manipulate or modify the data as part of their objective, which is also known as the payload of the attack.
In 1999 the threat world took its next evolutionary step: automatically using the collaboration tools that we, as humans, use to share data files. MS Outlook was one of the first targets, with the Word macro virus Melissa using macro code to grab the first 50 people in each victim's address book and then send itself to those people via an auto-generated email.
Part of the attack's success was that each email arrived from someone the victim knew; the sender had them in their address book, after all.
In 2000 we saw the "I Love You" script virus, which moved to a scripting language that was more agnostic and powerful than macros and was by then being leveraged by many types of data files.
This attack further refined the notion of social engineering: who didn't want a love letter, again from a trusted source, an acquaintance in the victim's contact list? It also employed a simple deception trick of using multiple file extensions to disguise what the file really was. Educating staff on what not to click is, and always will be, part of your security awareness programme, but it is made harder when the adversary confuses the victim about what the thing they are clicking on actually is.
Lessons learnt: (1) Collaboration tools are still the most common way for attacks to spread. (2) Educate your users on social engineering, but know that adversaries will do their best to trick them.
In 2001 we saw attacks start to really take the human out of the infection process. Code Red was a worm: an attack that simply replicates its own code to other systems rather than infecting existing files. And it spread at pace because, rather than relying on a user to click on anything, it exploited a vulnerability in Microsoft's web server software.
The best way I can describe a vulnerability is through an example. If you go to a cashpoint, there are keys 0-9 you can type, so I could choose to withdraw £99. But banknotes in the UK only come as £5, £10, £20 or £50, so it would be impossible to dispense £99. Many vulnerabilities exist because the code doesn't know how to handle an unexpected situation. In this instance, code should exist in the program to validate that the amount requested for withdrawal ends in a 5 or a 0. In the software world, a vulnerability typically allows a program to break out of the permissions it was allocated and instead run with default system permissions. In this instance it would allow the virus code to be executed and interact with the whole computer system.
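To make that concrete, here is a minimal, purely illustrative sketch in Python (the function name and rules are my own, not taken from any real banking system) of the kind of input validation whose absence creates the "unexpected situation" a vulnerability exploits:

    def validate_withdrawal(amount_pounds: int) -> bool:
        # Only positive multiples of £5 can actually be dispensed from
        # £5/£10/£20/£50 notes; anything else (like £99) is rejected up front
        # rather than left for the rest of the program to mishandle.
        return amount_pounds > 0 and amount_pounds % 5 == 0

    print(validate_withdrawal(99))   # False: unexpected input is caught
    print(validate_withdrawal(100))  # True: request can be dispensed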
At the time, Code Red was considered the fastest-spreading attack yet seen, using internet communication protocols to hunt for other vulnerable systems connected to the internet (your web servers are typically internet facing).
However, in 2003 the SQL Slammer worm hit, and hit hard, infecting hundreds of thousands of systems in just minutes. Again it relied on a vulnerability to gain access to systems, but this time, rather than using TCP, the internet standard for communications in which two computers build a trusted connection (a digital handshake with each other), it used UDP, which does not require a handshake, so the Slammer worm could simply be sprayed across the internet. It is called SQL Slammer because it exploited a vulnerability in SQL Server, a commonly used database.
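To illustrate that difference, here is a small, purely illustrative Python sketch (the address and ports are documentation placeholders, nothing to do with Slammer itself): a TCP sender must complete a handshake with a listening peer before any data moves, while a UDP sender can fire a datagram at any address with no connection at all.

    import socket

    target = ("192.0.2.10", 8080)   # placeholder documentation address, not a real service

    # TCP: connect() performs the handshake first; it raises an error if no one is listening.
    try:
        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.settimeout(2)
        tcp.connect(target)
        tcp.sendall(b"hello over an established connection")
        tcp.close()
    except OSError:
        print("TCP send failed: no handshake, no connection, no data sent")

    # UDP: no handshake at all; the datagram is simply sprayed at the address,
    # whether or not anything is there to receive it.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello with no connection", target)
    udp.close()
    print("UDP datagram sent without any prior handshake")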
Lessons learnt: (1) Software patching is a key part of today's cybersecurity and protects against vulnerabilities. You can also use security tools such as IPS (intrusion prevention systems) and other behavioural tools to buy more time to patch, or to mitigate the risk while you do. (2) Also realise how fast attacks can occur, and start to consider how long it would take you to recover: what is your resiliency strategy? (3) Make sure you clean up all of the attack. One side effect of Code Red was that it left a backdoor (in effect a new, unknown route into compromised systems); simply deleting the virus wouldn't remove the backdoor.
In 2001 Nimda changed again how attacks function. Until then, most attacks worked in a linear manner. Nimda changed that: it was like a Swiss army knife. It could spread by email, via internal network shared drives, by using exploits to get onto web servers and, of course, to any user who then visited the websites hosted on those servers. It also used the backdoor that the Code Red virus had added to systems it had compromised.
Lessons learnt: (1) Many attacks now have multiple capabilities that they use to gain access and to spread laterally. (2) You'll hear about the attack kill chain and, more recently, the MITRE ATT&CK framework. Both are methods to map out and technically explain how each attack works. The latter can also be applied at a technology level to help detection and prevention tools adapt automatically to new attacks.
Pretty much as long as there have been viruses, malicious code and vulnerabilities, there have been toolkits. These are collections of coding techniques behind either a graphical or command-line interface that allow potentially infinite variations of an attack to be created. Initially they were developed simply to let attackers build their own attacks more quickly, but in today's world they have become both a primary and a secondary revenue stream for cybercriminals: either selling the tools to distance themselves from the actual crime, or selling them on after completing their own attacks to earn a second income. The advent of AI has pushed this concept even further, as AI can be used both to simplify the interface and to create yet more new permutations of attacks. You'd be surprised: some of today's toolkits offer upgrades and even technical support, all too often under the guise of being educational software.
Lessons learnt: (1) With the advent of malware toolkits, the cybersecurity industry could no longer rely on detecting known bad things (also known as pattern-matching detection). Behavioural detection evolved to meet this demand; its advantage is being able to detect large swathes of attacks by how they work, the downside being a higher chance of flagging genuine software as bad by mistake (known as a false positive). (2) In critical parts of your digital business you may want less automated response to behavioural detections until you know an attack is genuine, whereas on lower-risk, lower-value systems you may allow behavioural detections to be used more aggressively (your canaries in the mine, if you like) and allow automated response steps to slow the rate of an attack, because should a wrong automated decision be taken, the business impact will be much smaller.
If there is one word that strikes fear into the cyber world today, I suspect ransomware would be it. As the name suggests, ransomware looks to find valuable data, encrypt it (typically today with a long, randomly generated encryption key), and then blackmail victims into paying money to regain access to their data. Over the years ransomware has gone through a number of evolutions. It started by targeting specific applications, and then specific industries and businesses likely to hold valuable data, with the goal of extracting increasingly targeted ransom payments. But over the years the attackers have become savvy data scientists: today's ransomware will often analyse data to better understand its value and allow the adversary to demand a more tailored ransom. This also led to the advent of double and triple extortion, where the adversary may threaten to resell your data or post it on the web to shame the company, sometimes doing both. We have also seen ransomware look at personal data and use it to coerce staff into doing the attackers' bidding. Over the years ransomware groups have become innovators in attack techniques: whereas in the early days they would buy attack tools on the dark markets to compromise their victims, today they are instead selling the compromise tools, and often access to systems they have already compromised (backdoors). At the same time, many ransomware attacks today will also target your backup systems, be they local or in the cloud; if they can gain access, they will delete your recovery copies of data.
Lessons learnt: (1) Ransomware is no longer just data encryption; many ransomware attacks are really blended attacks, so ensure you have taken all the steps required to recover from an often complex, multi-faceted attack.
(2) Ransomware can strike more than once, whether through multiple levels of extortion or because you get hit by the same attack again. Whether you pay or recover yourself, there are no guarantees they won't attack you again.
(3) Make sure you have a strong data backup process, which means knowing what data is critical to your business, where it is, who and what has access to it, and of course that there are copies of it that can't be deleted during an attack.
(4) Should you choose to pay the ransom, it doesn't guarantee you will get all your data back, or that the adversaries won't still use the data in other ways. If you are considering such a step, call in experts to help you through the process.
As we have all put data online, so we have seen regulations such as GDPR, POPIA and US privacy laws, to give just a few examples. Typically these focus on what data you can retain (there needs to be a genuine business purpose), how long you can sensibly retain it for, and how and why the data can be shared: you may hear the terms data controller (who gathered the data) and data processor (a second party re-using the data). They also require a clear process for the data subject to check the data is correct and to have it deleted, and, most importantly, set out what you must do should the data be compromised, i.e. what steps you need to take, and who, how and when you notify. This can lead to third-party audits validating that you had acceptable controls in place; should that not be the case, it can lead to large corporate and, in some instances, personal liabilities that can include fines and custodial sentences.
There are also many other forms of regulation and legislation, some focused by industry, others by country and others by perceived risk. The NIS Directive is one such example. This is a legislative requirement for all EU countries to transpose into their own national laws, and it is currently being updated to its second iteration to be applied by EU member states. It doesn't focus on end users' data; instead it focuses on the digital processes that underpin the essential services of daily life, such as food production, water, waste management, energy, healthcare, finance and emergency services, as well as digital services such as service providers and major online retailers. These organisations now face ever more stringent requirements for state-of-the-art cyber security and incident response capabilities, because when these services go down from a cyber attack the impact is felt far beyond the organisation itself. There are other requirements in the NIS Directive around intelligence sharing and how to work with national computer emergency response teams (known as CERTs).
The simple reality is that the digital world is becoming ever more regulated, which in my experience is starting to push organisations to move data, even in the cloud, back into their own key countries of business in an attempt to simplify the legislative requirements they face.
Lessons learnt: (1) Data is the lifeblood of most businesses, and in many instances that will include personal data. In most cases you will have to adhere to the data handling laws of the country where the data subject is resident, so don't assume you only have to abide by your own country's laws, and definitely get legal guidance; getting this wrong can be a very costly mistake. (2) Make sure you know which pieces of regulation and legislation your business is accountable to. The digital world is dynamic, so ensure that as you bring in new digital processes they are all mapped into your compliance frameworks; you should be getting updates from your security team on your state of compliance and the risks of non-compliance, both during a potential breach and day to day. (3) You can expect the degree of auditing here to grow, both business-to-business as companies manage their own risk, and from legislative enforcement agencies and cyber insurers, who lean on such compliance frameworks to assess your risk posture and ability to comply. Whilst such assessments were historically paper based, they are now increasingly also leveraging digital validation through technology tools.
Back in 2020 the WEF released the first of its cyber security future risks reports, for which I had the honour of being on the steering committee. Even back then AI was already emerging as a topic we needed to consider how to enable securely. However, the internet of everything was another key topic. Be it industrial control systems, smart offices (access controls, building climate controls, vending machines, etc.), smart homes (heating systems, energy management, smart appliances, etc.), smart cars or smart bodies (watches tracking our health, embedded medical devices such as pacemakers, insulin pumps and so many more amazing technology breakthroughs), every aspect of our world is becoming IP addressable (meaning it has an address that can be used to connect to it over the internet).
Then Covid sadly hit the world, and what was once considered segregated, our work systems and our home systems, became merged as they shared the same connection points to the internet and to each other; in reality we have got used to this hyper-connected way of life. So why is this so important? Put simply, you can have a high-risk system connected to the most inconsequential system, each with its own relevant risks and associated cyber security controls. You might not care if your TV is hacked, but when it becomes the conduit to your pacemaker, your work laptop, your business data and the connected business smart systems, I'm sure you would. The key point is that they share the same digital network (probably your home wifi) but most have no real need to be connected to each other, so they should be segregated: only the digital systems that genuinely need to communicate with each other should do so, sharing only the data that needs to be shared. These are the fundamentals of "Zero Trust", a concept that is now being broadly adopted. You could ask why systems aren't built this way; the simple reality is that it is easier to be open and ready (which gives any product bigger future revenue potential) and, being blunt, it takes much less effort than actually verifying what degree of connection each system needs.
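As a purely illustrative sketch (the device names and policy are invented for the example, not drawn from any real product), the heart of Zero Trust can be thought of as a default-deny rule: nothing talks to anything unless the connection has been explicitly justified.

    # Hypothetical allow-list: only connections with a genuine need are approved.
    ALLOWED_CONNECTIONS = {
        ("work_laptop", "business_file_server"),
        ("pacemaker", "clinic_monitoring_service"),
    }

    def may_connect(source: str, destination: str) -> bool:
        # Zero Trust in miniature: deny by default, allow only approved pairs.
        return (source, destination) in ALLOWED_CONNECTIONS

    print(may_connect("smart_tv", "work_laptop"))              # False: same wifi, no business talking
    print(may_connect("work_laptop", "business_file_server"))  # True: genuinely needed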
Lessons learnt: (1) In my humble experience, too many business networks are flat (i.e. everything can talk to everything rather than being segmented to manage risk), before you even get into the internet of everything. So, as step one, start by understanding the few (no more than 5 or 6) key digital processes that make your business function, and then look at how they are segmented off to manage risk. This should be the start of your Zero Trust model. (2) The next reason to segment is to build effective digital firebreaks, i.e. if you are being compromised by a cyber attack, are there natural breakpoints? These could be physical locations, business functions or other gaps that limit the scale of an attack. (3) Consider how you manage the boundaries between business and personal lives and technologies. Having spent significant amounts on your cyber security, there is nothing worse than finding a breach occurred via a personal device connection, or worse, that your systems inadvertently allowed access to an employee's insulin pump because they control it from an app installed on their corporate device. (4) Not all risks are equal, and not everything makes sense to segregate if the costs outweigh the risk. I like to think of it in three layers. High risk: things you really need to segment well. Medium risk: the things that need to connect to the high-risk systems and help the business function at a broader level. Low risk: things with minimal business value or risk, where the key requirement is simply that they can't access the high- or medium-risk things.
Probably one of the most scrutinised areas of any business today: no business works in isolation. In 2020 and 2021 you may have heard of two very significant attack campaigns in the media, SolarWinds and Log4j, the former identified by my former colleagues at FireEye as having been carried out by Russian intelligence. The key aspect is that your business didn't buy either of these software components directly; they were embedded into other pieces of software you had purchased, to help deliver functionality. You could almost think of these bits of software as the inner layers of a Russian nesting doll. Since these attacks, companies have started to want to understand what is embedded in the software solutions they buy and use. In many instances, because you buy the solution, you can't simply check and patch the components yourself without voiding the product warranty and support. It's not just software in-house that can be the point of compromise, but also your partners and physical supply chain. I have often seen the smallest supply chain partner have the same level of access into business systems as the most trusted strategic partner, yet each of those partners has a differing view of cyber risk and a differing level of investment to secure it. All too often breaches occur because these differing perspectives of risk aren't addressed.
Lessons learnt: (1) Make sure your security teams know what embedded software is in your key business applications; the inventory of this is now often referred to as an SBOM (Software Bill of Materials). (2) If there is embedded software that you can't touch, what security controls can be put around it to ensure that, if it is compromised, it doesn't impact the broader business systems? Zero Trust can greatly help you here. (3) More and more regulations are focusing on your supply chain and the data, software and processes you share both upstream and downstream, so having a well-documented supply chain risk strategy is key.
Moving to the cloud has offered businesses the ability to scale on demand, resiliency in hosting and the notion of agile development. Unlike historically, where updates and new releases happened periodically, typically as a bundle of changes rolled together, the dynamic nature of the cloud and virtual environments (the ability to mimic a physical server or capability in near real time in the cloud) enables continuous updates and improvements. So what's not to like? At the simplest level you see scope creep, i.e. unintended things making it into your live cloud environment. A very simple example: when software is being developed, little thought goes into the passwords being used; all too often "password" or "admin" get used because at that point it's simply a test build. Yet the test is successful, the test build gets promoted into the live build, and it still has the same weak passwords in it, leaving the business very exposed to compromise. The second challenge we have already covered, which is the supply chain: to run a business in the cloud you will likely be using at least five or more other services, from the source code repositories you need, to the tools that dynamically build your environment, to those that measure its performance and add or remove duplicate instances to manage the load on the application. The third aspect is time: security people have been used to having time to assess, define the risk and work with IT teams to ensure the appropriate countermeasures are in place; now you have environments that anyone in the business can buy, build and execute in hours.
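As a purely illustrative sketch (the file name, setting names and password list are my own assumptions, not any particular product's interface), a build pipeline could run a simple gate like the following before a test build is promoted, failing the stage if known default credentials are still present in the configuration:

    import json
    import sys

    CONFIG_FILE = "app_config.json"          # hypothetical config file name
    FORBIDDEN_PASSWORDS = {"password", "admin", "changeme", ""}

    def check_config(path: str) -> int:
        with open(path) as f:
            config = json.load(f)
        # Flag any setting whose name suggests a secret but whose value is a known default.
        bad = [key for key, value in config.items()
               if "password" in key.lower() and str(value).lower() in FORBIDDEN_PASSWORDS]
        if bad:
            print(f"Build blocked: default credentials found in {bad}")
            return 1      # non-zero exit code fails this pipeline stage
        return 0

    if __name__ == "__main__":
        sys.exit(check_config(sys.argv[1] if len(sys.argv) > 1 else CONFIG_FILE))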
Lessons learnt: (1) You need a clearly defined policy on who can build and implement cloud-based business processes and how cybersecurity is ingrained into the build cycles, known as CI/CD pipelines (continuous integration/continuous delivery). This should never stifle innovation, but should ensure it occurs in a structured way.
(2) With increasing regulation we are seeing far more focus on where in the cloud data is stored. When your business is using the cloud, ensure it is factoring in both the resiliency and the residency of wherever data is being stored and used.
(3) Because of the complexity and pace, you may wish to look at taking security as a service, so you can apply the right controls with help from those who really are experts in this space. Cloud security and cloud edge security (SASE, Secure Access Service Edge, and SSE, Security Service Edge; the former includes software-defined networking) give you the advantage of experts managing complex layers of security with economies of scale, as well as often the best technical know-how, freeing up your security teams to focus on what really needs to be done in-house.
In the last few years, LLMs (large language models) have changed how the world understands and uses AI and the data it can process. Yet it's not as complex as you may think. At some point data gets tagged with attributes describing what it is, and then tokens are used to express the likely relationship between two pieces of data. For example, take the word "Hello": there is a strong likelihood it will be followed by "how are you", and each of these words would carry a token value relating it to the next. At the end of the day, this is maths, applying probabilities to forms of data. (Any AI geeks out there, I apologise for the massive oversimplification.) Now, one of the key areas of focus today is something called prompt engineering, which means how the question is asked. Human language is very complex and nuanced, so it is not easy for technology to mimic. AI typically has guard rails, rules defined to stop it doing socially unacceptable things (think of software with a moral compass); so if I asked it to build me a petrol bomb, the AI would say, in some polite form, that it can't help as that would be dangerous. Now if I tell the AI that "flowers" is a replacement for petrol and "pot" is a replacement for bomb, and then ask the AI to help me build a flower pot, it may well give me the information it can gather from its enabled data sources on how to build a petrol bomb. This is simple word replacement, but there are many other techniques being discovered as we learn how to build robust AI models; double negatives are another simple example.
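As a purely illustrative sketch (the words and probabilities are invented for the example, not taken from any real model), next-word prediction boils down to looking up which continuation is most probable given the word that came before:

    # Toy next-word model: probabilities are made up purely for illustration.
    next_word_probs = {
        "hello": {"how": 0.6, "there": 0.3, "world": 0.1},
        "how":   {"are": 0.8, "is": 0.2},
        "are":   {"you": 0.9, "we": 0.1},
    }

    def predict(word: str) -> str:
        # Pick the most probable next word, crudely mimicking what an LLM does per token.
        options = next_word_probs.get(word.lower(), {})
        return max(options, key=options.get) if options else "<end>"

    sentence = ["hello"]
    for _ in range(3):
        sentence.append(predict(sentence[-1]))
    print(" ".join(sentence))   # hello how are you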
As well as using AI for your business, you should also consider how it is being used for cyber security. Naturally the adversaries are using it, not only to improve their toolkits' capabilities but also to gather intelligence from the internet so they can craft more personalised social engineering attacks, and its language translation capabilities now allow attacks to read just as well whether they are in English, Japanese, Russian or any other language. Cyber security suppliers, however, are also getting on the AI bandwagon, typically in three ways. Firstly, to translate the technical into layman's terms, effectively acting as a knowledge broker. Secondly, to augment human skills, for example allowing code to be described in human language which the AI translates into the desired programming language. And finally, skills replacement: using AI to do things better and faster than humans can, for example the big-data correlation done in your Security Operations Centre (SOC), or looking for new threats in completely different ways, because whilst AI will always carry the biases of those who programmed it, it isn't limited by human imagination in terms of creative analysis and reasoning.
Lessons learnt: None of this means we shouldn't engage with LLMs and AI in general, but you should consider the following. (1) How could someone subvert the AI you are developing? Much like the CI/CD pipeline of cloud agile development, the process needs strong governance around it. LLMs also rely on complex supply chains, which likewise require strong governance. LLMs are largely cloud based, although agentic AI (agent-based AI) now typically links them together to do some part of the processing locally; the key point here is again resiliency and data locality.