Meet Jeff, our AI threat modeling assistant

Our powerful AI Assistant saves you time and aids your diagram creation but keeps you in the threat modeling driver's seat.

Augment your threat models with our AI, Jeff.

Jeff is our powerful AI Assistant, which aids you throughout diagram creation. Jeff takes instructions from you to create a threat model, saving you time while supporting your vision for a specific architecture. Human collaboration and secure-by-design principles are two crucial elements of building secure applications. Our threat modeling tool enhances both, now with added oomph. Thanks, Jeff.

The best thing about it? You remain in the driving seat. Jeff is there as and when required, and you retain full control of your threat model, with the ability to edit, export and archive as you see fit. #NiceOneJeff.

Key takeaways

We took our time developing Jeff, because we felt it was important to first create a security library that lets companies threat model the output of AI systems. We then began developing the in-product capability to enhance the threat modeling journey. Here’s why you should try Jeff out:

  • Jeff augments your existing efforts

  • You stay in control of the threat model

  • Jeff guides, supports, and saves you time - as and when you need it

  • New to threat modeling? Jeff gets you up to speed, and fast

  • Access Jeff for free in our Community Edition 


What are the benefits of Jeff for me as a customer?


Jeff aims to overcome a number of common challenges, especially for those new to threat modeling.

  • Made easy with guidance - Firstly, Jeff guides you interactively and intuitively through the process of creating a diagram. You don’t have to draw anything yourself: simply tell Jeff what you are threat modeling and a diagram is created for you, which you can then refine and improve with Jeff or manually in IriusRisk after the project has been created.
  • Saves time and effort - If you have existing design artifacts, Jeff can use them to create a diagram. This means you don’t have to duplicate the representation by hand, which saves you time and effort. Jeff can handle virtually any textual representation, including:
    • A simple written out statement
    • Documentation
    • User stories
    • Source code
    • Meeting transcriptions
    • SBOMs
    • Probably other stuff as well
  • Learn from examples - Because Jeff does a lot of the heavy lifting for you, it is very easy to get started. You don’t face the challenge of a blank canvas, wondering where to even begin. In this sense, you can think of Jeff as creating bespoke templates based on your specific needs.
  • Have fun - Ok, this might not be as important as the others from a productivity point of view, but creating threat models with Jeff is genuinely fun. That can add to the motivation for threat modeling and help overcome adoption challenges with development teams.

What are the benefits of threat modeling with AI in general for me as a customer?


In general we think there are a number of interesting ways AI can assist with threat modeling, although ultimately the market will decide what is truly useful and what is hype. Initially our focus is on getting a threat model created in IriusRisk as quickly and as easily as possible, but there are other avenues to explore in future:

  • Enhancing the metadata in threat models, such as adding tags or completing questionnaires (Q3 2024)
  • Exploring the threat model output in a guided way, including risk and countermeasure actions (Q4 2024)
  • Embedding AI-based threat modeling directly in developer tooling (2024)
  • Using AI to enhance the rules engine and perform holistic analysis across the entire threat model
  • Generating dashboards and reports dynamically
  • Using all of the threat modeling data inside an IriusRisk instance to proactively identify trends and create more effective threat models

Why do you keep changing OpenAI model versions? Our customers won’t be able to use Jeff if you do.


OpenAI released some GPT model changes late last year, then nothing over the new year, then a bunch of changes in the past few months. IriusRisk has updated Jeff to test these model changes as part of our ongoing effort to improve the speed and effectiveness of Jeff as a threat modeling assistant.

There are a few things we need to keep in mind when it comes to model changes.

    1. AI and LLMs are a fast-moving and highly competitive space right now. OpenAI will continue to deliver improvements to its LLMs as quickly as it can, which means releasing updated models at pace. The flip side is supporting old models: OpenAI cannot support older models for very long due to the high cost it would incur. So even if we wanted to, we could not stay on older models forever.
    2. Another aspect concerns IriusRisk itself. AI-based threat modeling is also a fast-moving and increasingly competitive space right now. We are having to play catch-up with the likes of Secure Flag and ThreatModeler. This means we need to innovate at pace, and, equally importantly, continue to improve at pace. If a customer cannot upgrade to faster, cheaper, and better models quickly enough, that impacts our ability to innovate as a business. We cannot set the pace of our business by our slowest customer; or at least, we have to accept that we’ll fall behind if we do.
    3. Customers should focus their AI governance at a level above the specific model version in use. Sure, they may have to approve the use of OpenAI, possibly even distinguish between GPT-3.5 and GPT-4. But LLMs are a moving target, and they are setting themselves up for a ton of work if their governance goes into further detail than that.
    4. In future we will be able to partition our experiments with new OpenAI models from what customers use in production, but not during the MVP, and again subject to when OpenAI deprecates old models. The same applies to deploying ChatGPT through an Azure service rather than through OpenAI as a SaaS vendor: we would still be subject to the same constraints. The only way to truly control the pace of model development would be to build and run our own LLM infrastructure from scratch, which would be a massive and expensive undertaking right now.

If a customer registers for Jeff MVP, will every user in their tenant have access to Jeff, or is it only the admins within the org?


The Jeff MVP is accessed through Slack, so your Slack admins decide which channels to make Jeff available in. Once the threat model is in IriusRisk, the usual permissions model applies: the model is created by the Jeff user, and if the Slack user’s email exists in IriusRisk, that user is also added to the threat model.

If the channel is private:

  • Only the people in that channel can use Jeff (note that Jeff’s permissions need to be configured differently from the public-channel case, but this is possible)

If the channel is public:

  • Everyone with access to the channel can start a conversation
  • Only those who wrote "inviteme" and who exist in both IriusRisk and Slack with the same email can complete the threat modeling
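The access rules above can be expressed as a short sketch. All names here (`User`, `Channel`, `can_start`, `can_complete`) are illustrative assumptions, not the actual Jeff implementation:

```python
# Illustrative sketch of the Slack access rules described above.
# These types and functions are hypothetical, not Jeff's real code.
from dataclasses import dataclass, field


@dataclass
class User:
    email: str
    wrote_inviteme: bool = False


@dataclass
class Channel:
    is_private: bool
    members: list = field(default_factory=list)


def can_start(user: User, channel: Channel) -> bool:
    """Anyone with access to the channel can start a conversation."""
    return user in channel.members


def can_complete(user: User, channel: Channel, iriusrisk_emails: set) -> bool:
    """Completing the threat model requires an IriusRisk account with the
    same email; in a public channel the user must also have written
    "inviteme"."""
    if user not in channel.members:
        return False
    if not channel.is_private and not user.wrote_inviteme:
        return False
    return user.email in iriusrisk_emails
```

For example, a user in a public channel who never wrote "inviteme" can start a conversation but cannot complete the threat model, even if their email exists in IriusRisk.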

Is all my threat model data shared with OpenAI?


No, the only data that is shared with OpenAI (Enterprise) are the scenarios and subsequent conversation. Jeff then builds the threat model in IriusRisk which pulls in threats and countermeasures via the rules engine.

Is there traceability about why a specific Threat and Countermeasure appear in my generated threat model?


Yes. Jeff is focused on creating the diagram in IriusRisk as quickly and as easily as possible. It basically saves you time drawing the diagram manually by taking existing design artifacts or allowing you to informally describe what you are building. Once the diagram is created in IriusRisk, the rules engine runs as usual and pulls in and transforms the threats and countermeasures in a deterministic way. There is full traceability, audit logs, references to standards, and so on, giving you the necessary context as to why certain threats and countermeasures have been brought in and why they are in any given state.

ChatGPT is not great at generating threat models, why are you using it to generate my model?


ChatGPT has a ridiculously good understanding of a lot of the context needed to create a threat model, but it does have its biases. If you ask it to create a threat model, you have effectively narrowed down the approach to something generic and STRIDE-based. This isn’t because ChatGPT doesn’t know about different system designs or architectures, or because it doesn’t know about different cyber or privacy design flaws; it’s because it has a particular interpretation of what it thinks you are asking for when creating a threat model. There are potentially ways around this, by not asking for a threat model so directly, but this isn’t a problem for IriusRisk and Jeff because we don’t use ChatGPT to create a threat model - we use it to create a representation of the system you are threat modeling, then let the rules engine do its thing to create the full threat model.

How does AI extend beyond creating a diagram?


For now it doesn’t. We have strategic objectives this year to use Jeff to enhance the model by providing more metadata such as tags and completing questionnaires. We will also be exploring using AI to help users explore the output of the generated threat model, but it is too early to tell what this would look like. We have no intention of replacing the rules engine in IriusRisk with AI this year.

If we change the component mappings, does the model learn customer components rather than the default ones, how do we seed this to start?


Every time you start a conversation with Jeff, it pulls the available components into the Retrieval-Augmented Generation (RAG) system, which is used to map elements of the diagram to IriusRisk components. If you have custom components in IriusRisk, these will be available to Jeff. Your mileage may vary depending on how sensibly the components are named and how well they are described.
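As a toy sketch of how a retrieval step like this can work, consider scoring each catalog entry against a diagram element by text overlap. Real RAG systems use vector embeddings; this bag-of-words similarity and the example catalog are purely illustrative:

```python
# Toy sketch of RAG-style component matching: score each catalog entry
# against a diagram element by word overlap. Real systems use vector
# embeddings; this and the hypothetical catalog below are illustrative.
def tokenize(text: str) -> set:
    return set(text.lower().split())


def best_match(element: str, catalog: dict) -> str:
    """catalog maps component name -> description; returns the component
    whose name plus description shares the most words with the element."""
    element_tokens = tokenize(element)

    def score(item):
        name, description = item
        return len(element_tokens & tokenize(name + " " + description))

    return max(catalog.items(), key=score)[0]


catalog = {
    "postgres-db": "PostgreSQL relational database storing application data",
    "s3-bucket": "Amazon S3 object storage bucket for files",
    "api-gateway": "HTTP API gateway routing client requests",
}
```

Here `best_match("relational database for user data", catalog)` resolves to `"postgres-db"`, which also illustrates the point above: matching quality depends heavily on how well components are named and described.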

If everyone is using the same tenant do we segregate customer content, do we want customer "style/content" impacting other customer content?


For the moment we are using a single-tenant architecture for Jeff, to avoid the risk of customer data being shared across an instance, especially in the RAG. Of course, the OpenAI Enterprise SaaS we use is multi-tenant. We may switch to a multi-tenant or hybrid architecture at some point to keep costs acceptable, if we can overcome the security and privacy concerns.

Do you store my data?


Our AI assistant Jeff stores the conversation data for the duration of the conversation, and the Retrieval-Augmented Generation system stores component data for the duration of the conversation. OpenAI retains data to help identify abuse for up to 30 days, after which it is deleted [OpenAI Data Retention]. And of course, once the diagram and threat model are created in your IriusRisk instance, all of the threat model data is stored in your instance.

Does AI learn from my threat model?


No. Neither IriusRisk nor OpenAI learns from your use of the AI, and nothing is learned across customers. We may enable our AI functionality to learn in future, but it will remain purely within the context of a single customer and will never be shared between customers. Our use of OpenAI Enterprise means that your data is not used to train or enhance their LLMs.

Is there an SLA for Jeff?


As of June 12th, OpenAI does not offer a Service Level Agreement (SLA) for latency (or any other) guarantees on their various engines, as indicated in their Help Center: Is there an SLA for latency guarantees on the various engines? | OpenAI Help Center. Consequently, we currently do not have an alternative service to ensure continuous availability if OpenAI services experience downtime.

How do you ensure alignment in Jeff's responses?


We have integrated technology within Jeff that rechecks the responses and their format before presenting them to the user. This includes:

  • Focused Content: Jeff will only discuss topics related to Threat Modeling.
  • Component Verification: Jeff will verify that the components used are actual IriusRisk components.
  • Response Format: Jeff will ensure the format of the response is correct.
  • Quality Control: Jeff does not assess the quality of the response; this is the responsibility of the human in the loop.

These measures aim to enhance Jeff's reliability, though they are not 100% foolproof.
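A layered check like this can be sketched as follows. The JSON shape, field names, and functions are assumptions for illustration, not Jeff's actual implementation:

```python
# Sketch of layered response checks like those described above: a
# format check followed by component verification. The JSON shape and
# all names here are hypothetical, not Jeff's real implementation.
import json


def parse_response(response: str):
    """Format check: the response must parse as JSON with a component list."""
    try:
        data = json.loads(response)
    except json.JSONDecodeError:
        return None
    if isinstance(data, dict) and isinstance(data.get("components"), list):
        return data
    return None


def components_exist(data: dict, known_components: set) -> bool:
    """Component check: every referenced component must be a real one."""
    return all(c in known_components for c in data["components"])


def validate(response: str, known_components: set) -> bool:
    data = parse_response(response)
    return data is not None and components_exist(data, known_components)
```

Note that, as with Jeff, nothing here judges the quality of the response; the checks only reject malformed output or references to non-existent components, and the human in the loop remains the quality gate.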

What’s the process if a response is not adequate?


If you encounter an inadequate response from Jeff, please follow these steps:

  1. Note the Date and Time: Record the date and time of the response.
  2. Contact Customer Support: Reach out to our customer support team with the details.
  3. Conversation Review: With your permission, we will retrieve the conversation ID from OpenAI to study the interaction.
  4. Continuous Improvement: We will use the example to enhance Jeff’s future performance and behavior.

Your feedback helps us improve Jeff to better meet your needs.

Are there any limitations on Jeff usage?


We want software to be built securely from the design phase, which is why we have blazed a trail with our AI capabilities. More meaningful threat models equal more security by design. We are therefore not currently restricting the use of Jeff. There is, however, a fair use policy to ensure that nobody spoils it for the rest of us; if you want to create meaningful threat models, you will be well within it. As always, please contact your customer success manager, who can allay any concerns you may have.

Is there a legal document for Jeff?


Yes, there is a specific legal document: the AI Feature Addendum to the IriusRisk Customer Subscription Terms [Template] - Google Docs. This addendum outlines the terms and conditions relating specifically to the use of Jeff. Please refer to this document to understand the legal aspects and any obligations or rights concerning the use of Jeff.

Where is the OpenAI instance located, and can we have any control over its location? Can we use our own Azure OpenAI instance?


The OpenAI instance that powers Jeff is located in Ireland. Due to the use of exclusive OpenAI features specific to our implementation, it is not compatible with Azure GPT-4. As such, we cannot switch to a customer's Azure OpenAI instance.

What is Jeff doing during the moments in the conversation when it appears inactive?


During the demo, you might notice moments when Jeff seems inactive, but it's actively processing several critical tasks. Here's what's happening behind the scenes:

Part I: Input Scenario – After you input a scenario, Jeff sends this data to the LLM for initial processing.

Part II: Analysis and Diagram Generation – Jeff takes the scenario and:

  • Analyzes key elements,
  • Generates a preliminary diagram based on the scenario,
  • Creates generic components.

During this time, Jeff might seem inactive, but it is performing complex analysis and generation tasks. The user is then prompted to review and can make any necessary changes, which triggers these steps anew.

Part III: Component Matching – Once the diagram is approved:

  • Jeff requests information about components from the IriusRisk database,
  • It then matches these to the generic components created in the previous step,
  • A refined diagram is generated.

This part of the process involves significant data processing, which might not be immediately apparent.

Part IV: Finalization – After the user approves the final diagram, Jeff sends it to IriusRisk for completion, including threat modeling. Jeff then waits for the entire process to finalize, which can also appear as an inactive period.
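The four parts above can be sketched as a simple orchestration sequence. Every parameter here is a stand-in for an LLM call or an IriusRisk API request; none of these names come from the real Jeff implementation:

```python
# Hypothetical sketch of the four-part flow described above. Each
# callable is a stand-in for an LLM call or IriusRisk API request;
# these names are illustrative, not Jeff's real code.
def run_jeff_conversation(scenario, send_to_llm, fetch_components, send_to_iriusrisk):
    # Parts I & II: the scenario goes to the LLM, which analyzes key
    # elements and drafts a diagram built from generic components.
    # (The user reviews here; edits would re-trigger this step.)
    draft = send_to_llm("analyze", scenario)
    # Part III: pull the IriusRisk component catalog and match it
    # against the generic components, producing a refined diagram.
    catalog = fetch_components()
    refined = send_to_llm("match", (draft, catalog))
    # Part IV: after final approval, IriusRisk runs the rules engine
    # and builds the full threat model from the approved diagram.
    return send_to_iriusrisk(refined)
```

The apparent "inactive" periods correspond to the waits on `send_to_llm` and `send_to_iriusrisk`: each is a remote call whose latency the user experiences as silence.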

How much does it cost to add Jeff to my IriusRisk instance?


We will absorb the cost of the OpenAI usage for the first year.

Do you check or change the text sent to OpenAI (for example, PII filtering or any other kind of change)?


No, we do not modify or filter the text sent to OpenAI. We only verify the format of the text sent from the user to the LLM to ensure it meets the required structure.

Jeff’s Origin. Why Jeff?

Marketing would like to state that the christening of Jeff (IriusRisk's AI) came from a concern that the anthropomorphization of AI bots was a risk and a constant worry; that it would somehow render the bot synonymous with the Terminator movies and fictitious creations of that ilk - not a tool but a competitor.

To overcome this, we subverted the form by choosing such a normal name as to render the human-like nature null and void, imbued with the Ancient Greek principle of bathos: turning the sublime into the trivial and ridiculous, taking the current height of human achievement (AI) and subverting it with the name of a middle-aged man any one of us could know. We would like to state the above, but we cannot. Someone suggested the name Jeff, we all liked it, and we went to the pub. We then had to justify the decision to the wider team (see paragraph one).