SDL – Security Development Lifecycle – Threat Modeling – DREAD

Many of us today build solutions on Microsoft 365, AWS, and other cloud ecosystems as our core platform and computing environment. We often tie in Azure services, and reach across the globe to wire in free (or subscription-based) APIs to pull in and leverage data from third-party information providers. As we've come to rely on other companies to build and maintain core parts of our software systems, I feel the Security Development Lifecycle (SDL) has become more and more of an afterthought. We are, as I've observed it, starting to trust these companies blindly. There seems to be a complete void of conversation where there used to be at least some concern and discussion about the risks of going to the cloud.

So, in this post, I want to resurface an old concept so you can ask yourself, as a software professional:

  • Did you even think about the risks or concerns of building a public-cloud based solution?
  • If not, shouldn’t you at least take a few minutes to write down your thoughts and consider how Microsoft 365, AWS, Azure, etc. potentially expose you to security threats and limit your visibility into the inner workings of your product?
  • In choosing these third-party platforms, do you realistically have any ability, proactively or retroactively, to gain insight, design defensively, or remediate a discovered vulnerability?

With that premise set, let’s look at a key part of Threat Modeling and Threat Analysis: DREAD. Let’s also consider some interesting questions that challenge the IT discipline’s often blindly pro-cloud stance.

D: Damage Potential

R: Reproducibility

E: Exploitability

A: Affected Users

D: Discoverability

Damage Potential: on a scale of 1-5 (low to high), what is the damage to your organization if a flaw in your software is discovered and exploited? Thinking about cloud-based solutions, look at your design more objectively: did you really make things better, cheaper, or easier to maintain by offloading a given component to a third party? Are you accidentally revealing PII or protected company information? You know every one of these companies is logging every transaction and using that data for telemetry and their own “improvements,” right? Does that consent to data disclosure in itself expose your organization to some damaging scenario?

Reproducibility: on a scale of 1-5 (low to high), how reliably can a potential flaw in your software be reproduced? By moving to a cloud-based ecosystem, can you even assess this anymore? The flaws in your own code aren’t the only flaws you need to worry about: you’ve trusted Microsoft, Amazon, Apple, and “insert organization here” to be part of your team. Understand that your attack surface, and the interest the bad guys have in your platform, has grown well beyond anything you can control. With everything constantly churning and changing beneath you (and your software), can you determine, and re-determine, how often an attacker can reproduce and exploit a flaw?

Exploitability: on a scale of 1-5 (low to high), how difficult is it to exploit the flaw? Consider, for example, whether a user needs to be logged in. Part of your thought process here should circle back to the larger attack surface and increased attacker interest you’ve exposed yourself to by moving to a cloud-based platform. Anyone can sign up and, in some way, become part of the same cloud computing ecosystem you’re in. Moreover, you’re likely on a shared tenant. Yes, these providers have distributed, split up, striped, and scattered your data all over the place, so being on a shared tenant isn’t a big deal, correct? Well, I trust that big players like Microsoft, Amazon, Google, Adobe, and others have done this effectively. But make sure you consider the sophistication of the lesser-known platform and service providers you’ve made part of your software.

Affected Users: I don’t think the classic definition (installed instances) applies in this discussion, so let me put a slight spin on it: on a scale of 1-5 (low to high), what percentage of employees in your organization might be affected by a flaw in your software? Even more so in a cloud environment, be ethical and give this some thought: could you be exposing other organizations to damage if they share a tenant with you, or are on the same platform as you?

Discoverability: on a scale of 1-5 (low to high), what are the chances that your flaw will be discovered by an attacker? This is the category where you really need to consider whether moving to a cloud provider adds risk. If your app sat inside your own infrastructure, your environment would very likely attract less interest, exposure, and constant vulnerability probing from the bad elements out there than the platforms mentioned throughout this post.
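Taken together, the five ratings above can be rolled into a single risk score per threat so you can compare and prioritize them. Here is a minimal sketch of that idea in Python; note that the class name, the two example threats, and the simple averaging of equally weighted categories are my own assumptions for illustration (many teams weight the categories differently), not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class DreadRating:
    """One threat rated on a 1-5 (low-to-high) scale per DREAD category."""
    threat: str
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def __post_init__(self):
        # Enforce the 1-5 scale used throughout this post.
        for name in ("damage", "reproducibility", "exploitability",
                     "affected_users", "discoverability"):
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be between 1 and 5, got {value}")

    @property
    def score(self) -> float:
        # Overall risk: the plain average of the five category ratings.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Hypothetical example: rate two threats and list the riskier one first.
threats = [
    DreadRating("PII leaked via 3rd-party API logging", 5, 3, 2, 4, 3),
    DreadRating("Shared-tenant data exposure", 4, 2, 2, 5, 2),
]
for t in sorted(threats, key=lambda t: t.score, reverse=True):
    print(f"{t.score:.1f}  {t.threat}")
```

Even a rough table like this, revisited whenever a platform dependency changes underneath you, is better than the silent “cloud is fine” default.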

To conclude: this isn’t a witch hunt. Microsoft 365 rocks, Azure is amazing, and from what I understand (I’ve never used it) AWS is one of the great advances in cloud computing and development platforms. I’m not implying that these technologies and platforms shouldn’t be the core of your design and architecture. The point of this post was to revisit an old friend, Threat Modeling, and the use of DREAD to rate risk as part of your model. It was also to challenge the default mental model of “cloud is better,” adopted without truly considering the risks, that seems to have invaded the IT discipline. Take some time and give these concepts some thought the next time you choose where and how to host your software.

Thanks for reading.


Categories: Azure, Business, Office 365 and O365, Software Development

