October 31, 2019

DevOps and Software Security – Are We Stuck in a Rut?

"Security should be embedded within the existing code build process through direct plug-ins and integration into the tools that developers are using every day for their pipelines"

By CBR Staff Writer

Agile software development has been with us for nearly two decades since the original Manifesto was published. Software development and IT teams all strive for better software that responds to customer needs, broadly in line with the principles of Agile. However, there are still problems that exist around the processes and politics of software, writes Marco Rottigni, Chief Technical Security Officer EMEA, Qualys.

DevOps can help here, with teams collaborating on how to get software out faster and more efficiently. Yet for IT security teams, the rise of DevOps has also created problems around managing software security and risk. For DevOps and security teams that want to improve their approaches, how can they avoid being “stuck in the middle” and instead stick to the right processes in the future?

Software Security: Building Better Processes Across Teams

One of the biggest issues for IT security teams is getting involved early enough in the development process. For many, security is something that gets applied once the applications have been built and are moving into production.

However, this is an old-fashioned approach, held over from the days when development took place in waterfall phases and applications sat behind strong perimeter defences.


Marco Rottigni, Chief Technical Security Officer EMEA, Qualys.

Today, almost all software will include some elements of cloud, API integration or third-party code. It has become easier to mix software components to create new services than to develop from scratch. Indeed, any team that tries to implement its own cryptography or security rather than using off-the-shelf products will create massive problems for itself over time. Combining best-in-class services, open source components and internal code can deliver better results faster.

However, the first issue in this approach is visibility – with so many parts involved in each application, keeping each one up to date and secure is a Sisyphean task. For those using containers to run microservices-based applications, this can be even harder. Containers can be designed to exist only for as long as there is demand for the service, then be shut down and ‘destroyed’ once demand drops. The components exist only while the application instance is running, and it is during this window that they are vulnerable.

Containers are assembled from images that sit in repositories until they are needed. These images can be developed internally or pulled from public registries; either way, they have to be updated and kept current. If this is not done regularly, a supposedly “new” container will be created with any known faults already baked in.
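As a minimal sketch of keeping images current – the base image, tag and package commands here are illustrative assumptions, not specifics from the article – a Dockerfile can apply the latest OS patches at build time, so that regular rebuilds pick up fixes rather than reusing stale layers:

```dockerfile
# Illustrative only: base image and application path are assumptions.
FROM debian:bookworm-slim

# Apply current security patches at build time, so a "new" container
# is not created with known faults already included.
RUN apt-get update \
 && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*

COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

Rebuilding with `docker build --pull .` forces a fresh pull of the base image instead of reusing a cached copy, which is what makes a scheduled rebuild actually refresh the container’s components.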


For any cloud-based application, getting accurate information on what is running at any point in time should be a necessary step. For IT security teams, this data should provide insight into the real risks around any service, while developers can use the same data to understand their application instances when discussing performance.

Tracking Responsibility

The second area where this information can be essential is around tracking responsibility for those assets over time. When applications run in the cloud, they will be on another company’s infrastructure – that organisation may provide everything to run the service or let developers set up and run their own instances on top of the base cloud infrastructure.

When security gets involved with DevOps teams – either through ad hoc collaboration or more formal DevSecOps processes – it’s essential that security does not come across as disturbing the software development flow. Instead, it should be embedded within the existing code build process through direct plug-ins and integration into the tools that developers are using every day for their pipelines. This helps developers see security as directly benefiting their code efficiency, rather than stopping the process or acting as a blocker.
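One way to picture that embedding – this GitLab CI fragment is a sketch, and the stage names, image variables and the choice of Trivy as the scanner are all assumptions rather than anything the article prescribes – is a pipeline where the scan runs as just another stage in the developers’ own tooling:

```yaml
# Illustrative CI fragment: scanning is a pipeline stage, not an
# external gate. Stage names and scanner choice are assumptions.
stages:
  - build
  - scan

build-image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

scan-image:
  stage: scan
  script:
    # Fail the pipeline only on serious findings, so security acts as
    # a guard rail inside the developers' workflow, not a blocker.
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Because the scan lives in the same pipeline as the build, developers see findings alongside their own test results rather than as a separate process imposed from outside.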

At the same time, the cloud shared responsibility model means that developer and security teams need to work together around who will keep assets up to date. It’s not important who does this – what is important is that it does get done.

Planning Ahead Around Collaboration

Looking ahead, developers will continue to take on more responsibility for the whole process of building and running software over time. For security teams, getting involved earlier in the process should help integrate security tools such as software vulnerability scanning or container management. This does not mean telling software teams exactly what to do – instead, it should help developer teams prioritise their work and be aware of issues before they hit production instances.

Providing more visibility into potential security problems early can range from scanning images for common faults and outdated software to enforcing best practices on web applications. By offering guidance on issues earlier in the development process, they can be fixed before they reach later testing stages or wider distribution. This also makes fixing problems a matter of collaboration and prioritisation, rather than an argument between teams with different goals.
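A toy Python sketch shows the shape of that early check – the component names and minimum patched versions below are invented for illustration, not real vulnerability data – comparing an image’s component inventory against a floor of known-good versions:

```python
# Toy illustration of early visibility checks: package names and
# version floors are invented examples, not real advisory data.

MIN_PATCHED = {
    "openssl": (3, 0, 12),
    "libxml2": (2, 12, 0),
}

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '3.0.7' into (3, 0, 7)."""
    return tuple(int(part) for part in v.split("."))

def outdated_components(inventory: dict) -> list:
    """Return (name, version) pairs older than the minimum patched version."""
    findings = []
    for name, version in inventory.items():
        minimum = MIN_PATCHED.get(name)
        if minimum and parse_version(version) < minimum:
            findings.append((name, version))
    return findings

if __name__ == "__main__":
    image_inventory = {"openssl": "3.0.7", "libxml2": "2.12.4"}
    print(outdated_components(image_inventory))
```

Run against a build’s inventory, a report like this gives developers a prioritised list to act on before the image reaches testing, rather than a rejection after the fact.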

For security teams looking to get more involved in DevOps, concentrating on security for its own sake can be counter-productive. Instead, supplying developers with more insight into their applications will ensure that everyone works to the same goals.

Concentrating on visibility into application components can help everyone see where work is needed, while overlaying insight into responsibilities and priorities puts resources in the right places at the right times. Because what matters most differs for every company – shaped by its own IT history and technology choices rather than being the same for everyone – this guidance means everyone can stick to the right path ahead, rather than getting stuck in the wrong projects.

