
Intelligent orchestration basics: A conversation with Meera Rao of Synopsys

Meera Rao, senior director of product management in the Synopsys Software Integrity Group, discusses the basics of intelligent orchestration.

Intelligent orchestration (IO) is not a new concept – elements of it have been around for decades. But a much more comprehensive form of it is now becoming mainstream in a DevOps world where the speed of development and deployment is increasing by orders of magnitude.

Meera Rao, senior director of product management in the Synopsys Software Integrity Group. (Photo courtesy of Synopsys)

The new version of IO is an automated ‘heart and brain’ that calls for the right security testing at the right time, instead of overwhelming developers with so many security notifications that they start treating it as unwelcome background ‘noise’.

Meera Rao, senior director of product management in the Synopsys Software Integrity Group, has more than 20 years of experience in software development, more recently focusing on DevOps and CI/CD. She is also leading IO development at Synopsys.

Here she discusses some IO basics.

Is the primary value of IO that it helps effective security testing keep up with the pace of DevOps?

It’s that and a lot more. Over the past 12 years, we’ve seen development and operations teams across all verticals adopting DevOps practices. Automation is a key part of that in the SDLC. Testing, deployment and infrastructure are all automated.

Those teams are also putting increased effort into application security. But application security tools and all the security activities we perform are not well suited for integration into the DevOps pipeline as is. Introducing security into their DevOps or CI/CD pipelines creates friction. It’s slowing the speed of development and making developers so frustrated that they are reducing the use of these tools. What we hear is that they get too many false positives. Or they’ll say, ‘my pipeline itself takes ten minutes, but if I integrate security tools then it takes 40 minutes or more’.

And security teams face even greater challenges. One is that they have to retrofit all the tools they have into the DevOps pipeline. Another is how to balance automated versus manual activities. We talk about this all the time – that 50% of your vulnerabilities are bugs and 50% are flaws.

The automated activities won’t find any of the flaws, so you need manual activities. They could include code review, penetration testing, threat detection, threat modelling, and risk analysis.

The security team needs to maintain governance, compliance, and audit requirements, but the DevOps team is fighting them because of the friction, noise and everything else.

Those problems brought us to intelligent orchestration.

Can you define IO and explain how it works?

It is intelligent, risk-based and adaptive. It is a CI/CD pipeline optimised for the speed of development teams while making sure to maintain governance, compliance, regulatory and any other policies within your organisation.

IO creates a separate pipeline for security testing tools that runs parallel to the development pipeline. We provide a simple API so you can connect the two pipelines. You can decide whether you want to run it synchronously or asynchronously.

That means you can run your tests without slowing your development pipeline. It also reduces the burden on developers because within the IO pipeline you can configure the rules for the type of application you have, the technology and the framework to make sure you are performing the right analysis.
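Connecting the two pipelines through that API could be as small as a single call from the development pipeline. A minimal sketch (the endpoint URL, payload fields, and function name are hypothetical assumptions, not Synopsys's documented API):

```python
import json

# Hypothetical endpoint for the separate security pipeline (an assumption
# for illustration -- not a real Synopsys URL).
IO_ENDPOINT = "https://io.example.com/api/v1/scan"

def build_scan_request(commit_sha, async_mode=True):
    """Build the one HTTP call the dev pipeline fires at the IO pipeline.

    With async_mode=True the dev pipeline continues immediately while the
    security pipeline runs in parallel; False would block until results return.
    """
    return {
        "method": "POST",
        "url": IO_ENDPOINT,
        "body": json.dumps({"commit": commit_sha, "async": async_mode}),
    }
```

The synchronous/asynchronous choice Rao mentions is then just a flag on that single call.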

A simple example I use is: you made some changes in the JavaScript file for a font. Do I need to run all the activities – static analysis, dynamic analysis, and software composition analysis? A big no.

But if you make a major change to an encryption API or your authentication and authorisation API, then we know we need to run static analysis and perform a manual code review to see if the changes were implemented properly.

IO looks at the code change significance, the risk profile of the application, and the policies to be considered and decides whether or not we can skip certain security activities and allow the team to push through to production.
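The decision step Rao describes could be sketched roughly as follows. This is purely illustrative: the activity names, path heuristics, and risk labels are assumptions for the example, not Synopsys's implementation.

```python
# Illustrative sketch of risk-based activity selection: which security
# activities to run, given what changed and the application's risk profile.

# Path categories are assumptions for this example.
SENSITIVE_PREFIXES = ("src/auth/", "src/crypto/")
COSMETIC_PREFIXES = ("styles/", "fonts/", "assets/")
MANIFESTS = {"pom.xml", "package.json", "requirements.txt"}

def select_activities(changed_files, risk_profile):
    """Return the set of security activities to run for this change set."""
    activities = set()
    for f in changed_files:
        if f.startswith(COSMETIC_PREFIXES):
            continue  # e.g. a font tweak: no scans needed for this file
        if f.startswith(SENSITIVE_PREFIXES):
            # Change to auth/crypto code: static analysis plus manual review
            activities.update({"sast", "manual_code_review"})
        elif f in MANIFESTS:
            activities.add("sca")  # dependencies changed: composition analysis
        else:
            activities.add("sast")
    if activities and risk_profile == "critical":
        activities.add("dast")  # high-risk apps also get dynamic analysis
    return activities
```

A font-only change selects nothing and the pipeline pushes straight through, while a change under `src/auth/` triggers static analysis and flags a manual code review.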

Building those pipelines separately from your development pipeline gives teams the flexibility to adapt the solution to their workflows and toolchains, and helps developers embrace the change.

The best thing about IO is that it also provides continuous feedback the way you want it – with a notification on Slack, on Teams, Jira tickets or whatever platform the team uses. And it doesn’t have to be for every vulnerability the tool finds – and trust me, tools find a lot. One example is that we can configure it to let development teams know only when it finds critical vulnerabilities and cross-site scripting. Nothing else. Because we don’t want the developers to be notified for hundreds of irrelevant issues.
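A notification rule like that could be sketched as a simple filter. The rule structure and field names below are assumptions for illustration, not IO's actual configuration format:

```python
# Illustrative sketch: forward only the findings the team asked to hear about,
# e.g. critical severity or cross-site scripting, and suppress the rest.
NOTIFY_RULES = {"severities": {"critical"}, "categories": {"xss"}}

def notifications(findings, rules=NOTIFY_RULES):
    """Keep only findings matching the team's notification rules."""
    return [
        f for f in findings
        if f["severity"] in rules["severities"]
        or f["category"] in rules["categories"]
    ]
```

Everything else the tools find still lands in the tracker; it just never pings a developer.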

Then there is risk management. All organisations have policies, maybe a limit of seven days to fix a critical vulnerability or two weeks to fix one ranked high. The IO pipeline can talk to your Jira and say, ‘Hey, do you have any critical vulnerabilities that have crossed the threshold? If yes, then what is your policy? Do you want to pause the pipeline? Do you want me to notify someone on Slack or email?’ That’s intelligent orchestration.
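That SLA check could look roughly like this sketch. The findings structure, thresholds, and notification call are assumptions for the example, not a real Jira integration:

```python
from datetime import datetime, timedelta

# Illustrative fix-by SLAs: 7 days for critical, 14 for high (assumed values).
SLA = {"critical": timedelta(days=7), "high": timedelta(days=14)}

def overdue_findings(findings, now=None):
    """Return open findings whose severity SLA has been exceeded."""
    now = now or datetime.utcnow()
    return [f for f in findings
            if f["severity"] in SLA and now - f["opened"] > SLA[f["severity"]]]

def gate(findings):
    """Pause the pipeline (return False) if any finding is past its SLA."""
    late = overdue_findings(findings)
    for f in late:
        # Stand-in for a Slack/email notification per the organisation's policy
        print(f"NOTIFY: {f['id']} ({f['severity']}) past its fix-by SLA")
    return not late
```

Whether a breach pauses the pipeline or merely notifies someone is itself policy, so the return value would feed back into the orchestration rules.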

You have used the phrase ‘policy as code’ as part of IO. Can you explain what that means and how it differs from the way things have been in the past?

One example of the need for policy as code is risk management. All organisations have a policy written somewhere that says, ‘If this is a critical application, externally facing, then every 90 days, no matter what changes the application has gone through, you have to do a manual penetration test’. Or that, ‘With every major release you have to do a manual code review’.

In many organisations, that policy is enforced by a person. Every time something needs to go to production, this person comes into a meeting and says, ‘Hey we have this policy. You need to do X’, and everybody else is saying, ‘What? You are telling us now? Because we are going to production tomorrow’.

So it sets off a big scramble. Who are we going to get to do a penetration test? Who is going to do a manual code review within the next four days?

That shows the need for policy as code. How does it work? You can tell us that for a certain application, every 90 days there needs to be a penetration test, so a team needs to be notified after 80 days. We can do that with the code. Whatever the policy is, the program will know when to trigger that policy. All of that can be brought into the IO pipeline.
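The 90-day penetration-test policy with an 80-day reminder could be encoded along these lines. The status labels and field names are assumptions for illustration:

```python
from datetime import date, timedelta

# Illustrative "policy as code": a manual penetration test is due every 90
# days, with a reminder fired 10 days before the deadline (i.e. at day 80).
PENTEST_INTERVAL = timedelta(days=90)
REMINDER_LEAD = timedelta(days=10)

def pentest_status(last_pentest, today=None):
    """Return 'ok', 'notify' (reminder window reached), or 'overdue'."""
    today = today or date.today()
    due = last_pentest + PENTEST_INTERVAL
    if today >= due:
        return "overdue"   # escalate or block per the organisation's policy
    if today >= due - REMINDER_LEAD:
        return "notify"    # ping the team so the test can be scheduled in time
    return "ok"
```

Instead of a person remembering the rule in a meeting, the pipeline evaluates it on every run and triggers the notification itself.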

Who writes the policy code?

The customer tells us what they want and then we write it. Most organisations have some internal technology where they store all of these policies. So a client gives us those policies and then Synopsys helps the security team. All the policies are brought in as code, and the orchestration enforces them.

Does it take special training to implement IO?

Once we deploy intelligent orchestration with your organisation, then the goal is to train whoever it is – the security team or the DevOps team – on how to configure policies as code.

Once IO is up and running, we do the training. It’s not very complicated. We have detailed runbooks for them on what to do if they have to bring a new tool into the pipeline, or how to add notifications. There are a lot of activities that happen within the IO pipeline because the workflow itself will tell you what you need to do if you find something critical.

It also provides insights, such as who is the developer checking in bad code day in and day out? Or, Meera checked in code today – what did she check in? All of that information – the analytics, the metrics – is available even for auditing teams. We make sure to give them a runbook that provides everything – the pipeline itself, policy as code, application risk insight, the workflow.

We also show how to extend it. You might have built the pipeline using all Synopsys tools, but maybe tomorrow you want to use a new tool for a new language – how do you extend it? We do training on all those aspects before we say it’s up and running. We train whomever they identify. In some organisations it is the software security group, in some it is a development team with security champions, and then in some there is also the operations team because in some cases IO runs as a Docker container. There are a lot of other tools that are configured as containers, so the operations team in some organisations maintains that.

Given that some organisations have hundreds or even thousands of projects, does IO need to be configured or written differently for every single project?

The key here is to look at different languages and technologies that an organisation uses. We build one pipeline, and then you can iterate from that pipeline. For example, if you have Java and Maven and there is one pipeline built for that technology and all the dependencies, then if you have ten applications that use those languages, you can inherit from that pipeline.

Also, all of your applications can inherit from that pipeline and deviate where needed. It’s very extensible, very scalable and very adaptable. Suppose I build a pipeline for Java and Maven but the development team says they want to take a detour and do certain other activities? There are minor changes you can make and inherit a different version.
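The inheritance idea could be sketched as a base pipeline definition that applications override selectively. The keys and values below are assumptions for illustration, not IO's configuration schema:

```python
# Illustrative base pipeline for one technology stack (Java + Maven); every
# application on that stack inherits it and overrides only what deviates.
BASE_JAVA_MAVEN = {
    "build": "mvn package",
    "activities": ["sast", "sca"],
    "notify": "slack",
}

def inherit(base, **overrides):
    """Derive an application pipeline from a base, overriding selected keys."""
    pipeline = dict(base)   # shallow copy so the base stays untouched
    pipeline.update(overrides)
    return pipeline

# A team that wants Teams notifications instead of Slack changes one key:
app_pipeline = inherit(BASE_JAVA_MAVEN, notify="teams")
```

Ten applications on the same stack then share one pipeline definition rather than ten hand-written ones.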

What team runs IO, or is it automated and running in the background?

It can be the development, operations or security team. Especially with many organisations adopting DevSecOps, the line is blurred on who owns these pipelines.

If for some reason the pipeline breaks or the job fails, it will send notifications and let the respective teams know that either the tool is not working or the code is not compiling. There is constant feedback from the pipeline when something goes wrong, but it won’t bother you when something is successful. There is the right balance of automation and manual intervention when needed.

You’ve said that integration is an order of magnitude easier than with your existing pipeline. Can you explain that? Is it an example of IO?

It’s a pipeline that is running, and as soon as I have the security tools configured you just need to call one API, if you choose. It is literally one line of code that says call that API. And you still have access to your same defect tracking, your same Jira, and to your metrics dashboard. It still has access to whatever communication channel you are using, so it can send you notifications. And yes, it is an example of IO. The goal in 2021 is to make IO follow the low-code/no-code movement.

Does IO only work with certain tools?

It works with all the Synopsys tools, so you can choose the tools that you want to use, for static, dynamic, and interactive analysis, software composition analysis for open source. But if there are also commercial tools that the client uses, we will make it work with them. It also works with all open-source tools, so if you have OWASP ZAP, SpotBugs or OWASP Dependency Check, it works with those as well. IO is truly tool agnostic.