Why Software Shifted Left

Not all software works. We have all used applications that crash too often, stop working the way they used to, can no longer scale to our growing needs, or are simply compromised by some update or change that renders them non-functional.

But even before that point, not all software works during its development stage. This unfortunate and inconvenient truth has given rise to the expression ‘shift left’ development, an approach closely associated with test-driven development. It is all about testing software early and often – and it is typically discussed in the context of keeping software applications secure in the face of cyber attacks, malware and other threats.
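As a minimal sketch of the test-first idea mentioned above – with all function and test names invented purely for illustration – a developer writes the checks before (or alongside) the code they exercise, so problems surface at the earliest possible stage:

```python
# Hypothetical helper, invented for this example. In test-first style,
# the assertions below would be written before the function body exists.
def sanitize_username(raw: str) -> str:
    """Strip surrounding whitespace and reject markup-like characters."""
    cleaned = raw.strip()
    if any(ch in cleaned for ch in "<>;"):
        raise ValueError("potentially unsafe characters in username")
    return cleaned

# The tests come first in the workflow, even if they appear last on the page.
def test_sanitize_username() -> None:
    assert sanitize_username("  alice ") == "alice"
    try:
        sanitize_username("bob<script>")
    except ValueError:
        pass  # unsafe input correctly rejected
    else:
        raise AssertionError("expected unsafe input to be rejected")

test_sanitize_username()
```

The point is not the specific check but the ordering: the safety requirement is encoded as an executable test from day one, rather than bolted on after deployment.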

What shift left really means

The official definition of shifting left, as it relates to security, is the process of implementing or using a tool earlier in the software development lifecycle to enable teams to build more secure applications before deployment. Given this contextualization, we can now look at how building software has changed over the past couple of decades and what developers are doing today to make our apps safer and more robust.

In her role as developer advocate at cloud security company Lacework, Kedasha Kerr says she has spent time talking to many engineers who worked throughout the segment of the PC revolution spanning the 1980s and 1990s into the 2000s. This process provided some invaluable insight into where we are now with software.

“I realized that programming [back then] at that time was the wild, wild west,” said Kerr. “Programmers were responsible for not only coding an application, but testing, deploying and project management. This is where the term full-stack engineer started to be used, which created a different type of work-role silo in teams, compared to what we see today with frontend and backend software engineers.”

Tumbling down the software waterfall

Kerr, who quite marvellously tweets as @itsthatladydev, reminds us that this wild west programming period was a time when the ‘waterfall’ model of software development was widely used: developers would build all the code they could and then tumble it over into production in an essentially linear, sequential set of phases. In other words, downwards in one direction.

Because of the waterfall effect, it would sometimes take one to two years to deploy projects to production – and when code finally shipped, security wasn’t front of mind.

“Because on-premises datacenters were widely used and personal data did not live on the cloud or across the Internet at the time, there was more focus on physical security – ensuring that data warehouses were only accessed by authorized individuals. If there was a security issue, engineers often wouldn’t know about it until it was published in a dedicated magazine or they heard their peers speak about it in a meeting,” clarified Kerr.

This all meant that when code was deployed to production, there often wasn’t a ‘live production’ environment (as we know it now with the immediacy and continuity of the cloud) because ‘deploying’ to production meant physically mailing a CD and/or floppy disk to customers so they could update the software on their machine.

“This was a period when software was meant to run on a single machine – there was no such thing as a web application. If a company didn’t provide access to Microsoft Visual SourceSafe, version control meant having a folder on a hard drive that was passed around between engineers,” said Kerr.

For other engineers at the time, going to production was painful and nerve-wracking because there was a lot of copy/paste involved. Software would be released to production roughly every six months.

Kerr says that this all meant that programmers (and their supporting operations staff in roles such as database administrator (DBA) and systems administrator (sysadmin)) needed to take down the servers overnight and copy the source code from one directory to another… all while crossing their fingers and hoping that the entire system wouldn’t be taken down, and that they had a reliable copy of the code to roll back to, stored safely on a floppy disk.

Then… came Agile

“Because there was often no test environment, developers relied on peer reviews before shipping the code and hoped that it worked as intended. But in 2001, a group of programmers came together to create the Manifesto for Agile Software Development, changing the way that applications were built. The manifesto introduced 12 guiding principles around teamwork, leadership and customer satisfaction. The Agile process made software deployment cycles significantly shorter and companies quickly adopted the practice to rapidly deliver solutions to customers,” explained Lacework’s Kerr.

Looking back at the initial embrace period when Agile was being popularized and adopted, Kerr points to the change of cadence that happened here. Where code used to get deployed on an annual basis (six months if you were lucky), we saw release cycles as short as two weeks. The Internet age had arrived, the cloud was forming and things looked good. We hadn’t really stopped to worry enough about data control, cybersecurity and locking down the systems we were building, but that was okay because we would worry about it later – obviously, it wasn’t okay, but let’s keep going.

“Today, when we consider how software is pushed to production, we think of automated processes with Continuous Integration & Continuous Deployment (CI/CD) pipelines and built-in test suites. We have more specialized roles with dedicated specialists working in DevSecOps, product management, cloud architecture, frontend development and backend development – and so finally, a single programmer is no longer responsible for all stages of building software. Going to production is as simple as pushing a button, and thanks to version control systems such as Git, there is no longer a need for floppy disks and CD-ROMs to store source code,” said Kerr.
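The pipeline idea Kerr describes can be sketched as a toy stand-in in code – the stage names and pass/fail logic here are invented for illustration, and a real pipeline would live in a CI system’s own configuration rather than a script:

```python
# Toy stand-in for a CI/CD gate: run each stage in order and block the
# 'deploy' step at the first failure, as a CI server would.
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> bool:
    """Run stages in order; stop at the first failure."""
    for name, check in stages:
        if not check():
            print(f"stage failed: {name} -- deployment blocked")
            return False
        print(f"stage passed: {name}")
    print("all stages passed -- deploy")
    return True

# Example stages: unit tests and a security scan, both 'shifted left' of
# deployment. The lambdas simulate the outcome of each stage.
pipeline = [
    ("unit-tests", lambda: True),     # pretend the test suite passed
    ("security-scan", lambda: True),  # pretend the scanner found nothing
]
deployed = run_pipeline(pipeline)
```

The design point is that security scanning sits inside the same automated gate as testing, so a failing scan blocks deployment just as a failing test does.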

While Agile processes make building software faster and more efficient – with scrum, technologies like Jira (a proprietary issue-tracking product developed by Atlassian that supports bug tracking and agile project management) and two-week sprints – Agile methodologies are often argued to neglect post-deployment security reviews and cloud misconfiguration checks.

The spectre of technical debt

Kerr points out the implications of this: if vulnerabilities or misconfigurations are found before going to production, there is little time to address the concerns because another two-week sprint is about to begin. Those vulnerabilities get pushed into ‘technical debt’ (sections of code that ultimately need to be refactored and fixed because they fail to align with the functionality, safety and scalability requirements of the total software system being built). In her view, instead of sprinting to the finish line and continuously shipping new features, we need to take a step back to ensure that our code and our processes include guardrails against bad actors.

“Software engineering has evolved into a well-organized machine where quality code is the standard and testing is mandatory. However, in today’s environment, data lives in the cloud. This means, when building software, we must implement a security-first mindset – not physical security, but cybersecurity. We are no longer in the days of on-premises data warehouses – we live in a world where web applications are the standard and bad actors are hungry to gain access to the data that lives in the cloud,” reinforced Kerr.

All of this discussion brings us to a point where we need to think about how we think. Instead of treating shifting left as a standalone corporate process, we can incorporate a security-first mindset into our daily workflow much as we do with testing – at each stage of development.

“Let’s ensure we incorporate the same patterns when it comes to application security. Having a security-first mindset helps us to build software that has stronger resilience against bad actors and allows us to feel more confident with the code that we’re shipping. This mindset shift will help us identify data access issues earlier in the build process, rather than an aftermath effect of not having the right permissions in place,” concluded Kerr.
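The idea of catching data access and permissions issues “earlier in the build process” can be sketched as an ordinary unit-style guardrail check run alongside the test suite. The configuration keys below are invented for the example and do not follow any vendor’s schema:

```python
# Hypothetical deployment config; the keys are invented for illustration.
bucket_config = {
    "name": "customer-exports",
    "public_read": False,
    "encryption_at_rest": True,
}

def check_bucket_guardrails(config: dict) -> list[str]:
    """Return a list of violations; an empty list means the config passes."""
    violations = []
    if config.get("public_read", False):
        violations.append(f"{config['name']}: public read access enabled")
    if not config.get("encryption_at_rest", False):
        violations.append(f"{config['name']}: encryption at rest disabled")
    return violations

# Run alongside the normal test suite, before code reaches production.
assert check_bucket_guardrails(bucket_config) == [], "misconfiguration found"
```

Because the check fails the build the same way a broken unit test would, a misconfigured permission surfaces during development rather than as an aftermath effect in production.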

Shift-left for businesspeople

This is an IT story, a software engineering story, a technical geek’s workflow process story and on many levels it is of course a software security and cyber-strategy story… but let’s just think wider for a moment.

A lot of the terms used here are now bleeding into business management and process engineering studies. Now that we are talking about post-pandemic Agile agility and workflows that gravitate around scrum-based planning systems, this is (arguably) perfect theorizing for the management consultants of tomorrow to (god forbid) start applying to every aspect of business.

We are now also embracing shift left itself as a precautionary business test theory: we can simulate real-world deployments with virtualized, abstracted technologies, often using the digital twins we build in the Internet of Things (IoT) to represent not just physical objects, but processes, systems and entire cities. In doing so, we can shift leftwards to a better place.

Thankfully, shift left is internationally language agnostic: people who speak human languages written right to left, such as Arabic, Urdu, Hebrew and Farsi, will still understand the concept, because the computer command line starts on the left-hand side of the screen. Whichever side of the page/screen you start from, shift-left is right.