Nine questions to ask your AI vendor and why

  • Machine learning algorithms

    What type of machine learning algorithms does your product use?

    Why should you ask this?

    As the age-old saying goes, “when all you have is a hammer, all problems look like a nail.” Rather than relying on one tool, a rich AI toolkit that blends supervised and unsupervised machine learning models with deep learning provides broader coverage, faster detection, and greater security efficiency (a minimal illustrative sketch of such a blend appears after this list).

  • Volume of machine learning algorithms

    How many machine learning algorithms does your product have, and how are they categorized? How frequently do you update them and release new algorithms?

    Why should you ask this?

    You want an AI vendor that provides broad coverage of attacker behaviors (e.g., C&C, reconnaissance, lateral movement, exfiltration) and depth of coverage for each behavior. This flips the old rule that security must work perfectly 100% of the time while cyber attackers only need to succeed once: AI with several algorithms across a broad set of attacker behaviors forces the attacker to stay perfectly hidden at all times.

    In addition, AI vendors should continuously improve the machine learning algorithms in their product, with detections that cover the entire attack lifecycle.

  • Machine learning times

    How long until machine learning algorithms can trigger detections in a new environment? How many algorithms require a learning period, and how long does that take?

    Why should you ask this?

    You should expect immediate answers from your AI vendor. Not all machine learning requires an extended period of time to learn before it can provide answers. If every algorithm requires an extended learning period, that is an indicator of unsupervised learning alone, which only detects anomalies and will produce a higher volume of alerts that require manual triage. By applying the appropriate machine learning models to attack behavior detection, you get both immediate results and a better signal-to-noise ratio, which does the most to reduce the workload for your security operations team.

  • Risk prioritization

    How does your product prioritize critical and high-risk hosts that require immediate attention from an analyst?

    Why should you ask this?

    Deciding where to focus an analyst’s time and how to respond requires an understanding of the assets impacted and the risk to your business. You want your vendor’s solution to prioritize hosts by risk level, including workloads, servers, and IoT devices. This ensures that analysts respond to the highest risks first, reducing the total cost and risk of a breach.

  • Incident response workflow

    Does your product integrate seamlessly into existing detection, alerting, and incident response workflows?

    Why should you ask this?

    You already have incident response processes that include people, process, and technology. AI should integrate with those processes rather than work in an isolated silo. The intelligence from AI products should provide the starting point for further investigation within existing tools and make your entire security infrastructure smarter, improving the efficiency of your existing investment.

  • Third-party response integration

    What firewall, endpoint security, or NAC integration does your product provide to block or contain detected attacks? How does your product integrate with these platforms?

    Why should you ask this?

    Not every attack is the same, and not every response should be the same either. An AI platform should provide intelligence to existing infrastructure to reduce the time to respond. More importantly, you should expect the product to enable the correct response.

    You should expect integration to be simple and straightforward; otherwise it will most likely never happen, and a complex deployment will defeat any benefit. Integration can occur through APIs, outbound events, or automation platforms that provide standardization between products (see the outbound-event sketch after this list).

  • Workload reduction

    What is the workload reduction your product provides for security analysts? What kind of efficiency increase can be expected?

    Why should you ask this?

    You should expect AI to augment human analysts, making them smarter and more efficient at their jobs. This means reducing the workload so analysts focus on the events that matter most. Part of that means providing human-readable output from machine learning detections, including guidance on what a detection means and what next steps an analyst should take to verify and respond.

    If the product’s output is complex and increases the manual load on the analyst because it requires more work to decipher and investigate, then it really isn’t artificial intelligence.

  • Red team testing

    Does your product support running red team exercises to prove that its machine learning algorithms and AI work in real-world scenarios? Will you pay for the red team if your product doesn’t detect anything?

    Why should you ask this?

    You should always test detection tools in real-world scenarios to ensure they really work when they really need to.

    Any detection product should understand all the tricks used by an attacker, and therefore how real attacks work. If, in a live exercise, the detection technology fails to spot the attacker’s behavior as it happens, how useful is it? Shouldn’t the vendor be responsible for the costs of testing their own product if it turns out to be snake oil? If the vendor is confident, this type of guarantee will not be a problem.

  • Remote vendor access

    Do you recommend that human analysts have remote access to the product during the evaluation? Why?

    Why should you ask this?

    Vendors like to collect metadata from customers for long-term tuning of their technology. This metadata provides the vendor a great feedback loop. However, some vendors have been known to use human analysts during product evaluations to perform manual threat hunting and analysis. Their goal is to ensure you receive a report that demonstrates product value, but shouldn’t the AI work on its own? What happens when the analyst is gone after the product is deployed? The goal of AI should be workload reduction and increased efficiency to reduce the time to detect and respond to cyberattacks, not a remote analyst employed by the vendor to make the evaluation look great.

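For illustration only, the “blend of supervised and unsupervised machine learning” mentioned under the first question can be pictured with the minimal sketch below. It is a toy example, not any vendor’s actual pipeline; the per-host features, training data, and choice of models (scikit-learn’s RandomForestClassifier and IsolationForest) are assumptions made for clarity.

# Toy sketch of blending supervised and unsupervised detection.
# Feature names, data, and models are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-host features: [MB sent out, unique destinations, failed logins]
normal = rng.normal(loc=[50, 5, 1], scale=[10, 2, 1], size=(500, 3))
known_bad = rng.normal(loc=[400, 60, 20], scale=[50, 10, 5], size=(50, 3))

X = np.vstack([normal, known_bad])
y = np.array([0] * len(normal) + [1] * len(known_bad))

# Supervised model: recognizes labeled, known attacker behavior right away.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Unsupervised model: learns what "normal" looks like and flags deviations.
iso = IsolationForest(random_state=0).fit(normal)

def triage(host_features):
    """Combine both signals: known-bad pattern OR unusual-for-this-network."""
    x = np.asarray(host_features, dtype=float).reshape(1, -1)
    p_known_attack = clf.predict_proba(x)[0, 1]
    is_anomaly = iso.predict(x)[0] == -1
    return p_known_attack, is_anomaly

print(triage([420, 70, 25]))   # matches known attacker behavior
print(triage([50, 5, 300]))    # no matching label, but clearly anomalous

The point of the blend is that the supervised model answers immediately for known attacker behaviors, while the unsupervised model catches activity that is merely unusual for this network, so neither approach is left to carry the load alone.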
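Similarly, for the third-party response integration question, an “outbound event” can be pictured as pushing a detection to existing infrastructure as a JSON payload. The sketch below is hypothetical: the webhook URL, token, and field names are invented for the example, and a real integration would follow the specific API of your firewall, EDR, NAC, or SOAR platform.

# Hypothetical sketch of an "outbound event" integration. The URL, token, and
# field names are invented for the example; a real deployment would follow the
# API of the firewall, EDR, NAC, or SOAR platform in use.
import json
import urllib.request

detection = {
    "host": "db-server-07",
    "behavior": "lateral_movement",
    "risk": "high",
    "recommended_action": "isolate_host",
}

def send_containment_event(event, webhook_url, api_token):
    """POST a detection so existing infrastructure can carry out the response."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_token,
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Example call (endpoint is hypothetical):
# send_containment_event(detection, "https://soar.example.com/hooks/contain", "TOKEN")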
Setting up a demo is quick and easy