top of page

FAQs

Frequently asked questions.

  • What does AI governance mean?
    A company using the AI Risk software designatesadministrators. Administrators control the use cases, or AI agents, that team members can use, as well as the AI model (typically Large Language Models or LLMs) applied to each use case.
  • What does AI risk management mean in the context of the AI Risk platform?
    The AI Risk platform covers many separate areas of risk management, for example: AI model risk management. It detects hallucinations, toxic language, and other potential model failures. Confidential information. It detects and blocks (or allows if you want) confidential information, personal identifying information, and secret keys (e.g. your Chat GPT key) from being sent to the external AI model, such as an LLM. Use case. It contains the user to the use case for the specific AI agent.
  • What does AI compliance mean in the contract of the AI Risk Platform?
    Compliance covers a number of areas, including regulatory compliance where applicable. For example, the AI Risk Platform records all conversations as well as the metadata, such as user, time, cost, documents and data accessed, etc. That data can be used by a compliance team for e-discovery, the system administrators to review hacking or data exfiltration attempts, and the AI development team to review user feedback and identify strong and weak points of the process.
  • What does Artificial Intelligence Risk, Inc. (AI Risk) do?
    The company provides a platform for corporate AI that includes governance, risk, compliance and cybersecurity management and facilitates adoption of AI with one day.
  • Is there a company motto, tagline or vision?
    We believe AI will be foundational to business in the near future. We make software to control and monitor AI to make it safe, trustworthy, and compliant.
  • What are some problems with using AI at a company without using a platform like this?
    There are a few problems. The AI does not report to anyone, so if it gives a wrong answer, who do you blame? If it does something well, will people find out? Also, only the user knows the prompts and the completions. Recording those may be required for regulatory purposes and should be for best practices. Also, how do you govern what data the AI can see and protect your confidential information and your client's personally identifiable information? AI needs governance, risk, compliance, and cybersecurity management.
  • What does AI governance mean in this case?
    A company using the AI Risk software designates administrators. Administrators control the use cases, or AI agents, that team members can use, as well as the AI model (typically Large Language Models or LLMs) applied to each use case.
  • How do you set up an AI agent in the AI Risk platform?
    Administrators set up an agent by choosing several features. Some of those features are: For each AI agent, choose The AI model to use, for example choosing among Chat GPT 3.5 (lower cost), Gemini, Llama 2, or a custom internal model. The initial AI system prompt, or metaprompt. The metaprompt is a powerful tool that governs what the AI agent is allowed to do, such as only search a designated set of documents without searching the internet for more information. Whether to allow including documents and data, and if so designating specific documents and data in advance and also allowing the user to add documents and data (e.g. through drag and drop). Whether to block company confidential information from the AI agent Whether to block all or only some personally identifiable information from the AI agent Which prompt screens to enable that block: toxic language, hacking attempts, Which completion screens to enable, including hallucinations, confidential data, toxic language, etc.
  • What are some examples of AI Agents built into the AI Risk platform today?
    The platform has the following agents built in and we are creating more constantly. Here are some highlights. An agent that answers these questions using this document! An agent that uses a database extraction to answer complex questions and create charts and tables from the data. (We have a sample of refinery emissions data provided by Ecolumix, Inc. that we are using.) Employee handbook question and answer agent. Sales email writer using a Linked-In file copied by the user and prior successful pre-loaded sales emails set by the administrator.
  • Does the AI Risk platform slow down the process?
    We have optimized the AI Risk platform to run in parallel, so typical lag is 0.2 seconds on the prompt and similar on the response. Note that the system currently waits for the LLM to complete the answer before checking it, so users will not see the answer scroll across the screen as with some other platforms.
  • Why do I need more cybersecurity for using AI NLP systems, like LLMs or Chat GPT?
    Cybersecurity for LLMs is different from all other types of cybersecurity. A hacker or rogue employee can use different styles of attacks, such as do anything now or “DAN-style” attacks, prompt injections, etc. to try to exfiltrate confidential data, reveal training data, or jailbreak the AI. By providing an additional layer of cybersecurity protection in addition to whatever is built into the LLM, you are adding an alarm system to your property as well as locking the door.
  • What are some benefits of using the AI Risk platform?
    Benefits include: Transparent governance of AI agents and company data and documents Effective risk management, including screening for confidential information and AI specific risks such as hallucinations Integrated compliance, including archiving prompts and completions and protecting against unauthorized release of information and integrating into third-party compliance systems. Layered cybersecurity, like having your doors locked and an alarm. Live dashboards, monitoring activity with your AI platform 24/7 looking for problems, including hacking attempts. Forward compatible with future LLM releases.
  • We are a three thousand person company. Can AI Risk, Inc. handle that?
    Absolutely we can handle big companies. We might recommend starting off with a 50 to 100 person pilot to help develop your use cases (AI agents) before rolling it out to everyone. We do offer a training session during the onboarding process.
  • Who are the Founders of AI Risk, Inc.?
    The Founders of AI Risk, Inc. are Alec Crawford and Frank Fitzgerald. Alec worked on Wall Street for decades, including at such firms as Goldman Sachs and Morgan Stanley. Nevertheless, he is originally a computer scientist from Harvard who did his undergraduate thesis on Artificial Intelligence, writing code from scratch to build neural networks and expert systems. He was one of the first people to posit “composite AI”.His most recent role before founding AI Risk, Inc. was as a Partner and Chief Risk Officer for Investments at Lord, Abbett, an asset manager privately held by its Partners. Frank was hired on O’Shaugnessy Asset Management as a developer and quickly was tapped to be CTO. He then automated away so much work, they made him COO! An award-winning founder, this is his third startup.
  • AI Risk is a startup, right? How can we be sure they will be here tomorrow?
    The Founders have raised a significant amount of capital in the first round of funding and anticipate being able to raise future capital if necessary as the firm grows, nevertheless, we have a low cost base and have plenty of money to be able to run for years even if we had no income from clients. It is possible that we are cash flow positive or even profitable some time in 2024, which is great for a startup!
  • I’d like to invest in your company, can I?
    Unfortunately, unless you are a venture capital firm, probably not. If you are, feel free to reach out to the CEO to chat.
  • I’d like to work at AI Risk, Inc. How do I apply for a job?
    We do not have any formal job openings right now. Most of the people who work for us are known to us personally or are referrals from people we trust.
  • When was AI Risk, Inc. founded?
    AI Risk, Inc. was founded by Alec Crawford in July, 2023 and Frank joined soon afterwards.
  • How many people work at AI Risk, Inc.?
    Currently, the two Founders, plus four other people working on contract for a total of six people.
  • Where is AI Risk, Inc. based?
    AI Risk, Inc. is a Delaware corporation. Alec lives in Connecticut and Frank lives in Zurich.
  • How much does the AI Risk platform cost?
    $19.95 per person per month for the basic version. The compliance engine is an additional minimum of $500 per month depending on the total number of users.
  • Is there an API version of the AI Risk platform?
    Yes, there is an API version, please contact us for more information. It is most appropriate for thousands or even millions of queries per day where the client wants to block from the users things like toxic language and hacking attempts.
  • What types of personally identifiable information can be blocked from usage?
    We currently block hundreds of types of personally identifiable information (PII) from around the globe, from US Social Security numbers on. Note that for specific AI agents, the administrator can allow usage of specific type(s) of PII if that is needed for the specific use case by finding them in the appropriate window and unchecking them.
  • How can AI Risk, Inc. keep my other confidential information and data safe?
    The platform does this in several ways. For example, if there are specific documents you do not want users to upload, we can create a custom confidential document deterctor trained on your internal documents to detect and block them from being used or uploaded. In addition, we offer the ability to use a sandboxed Llama 2 LLM where information is erased immediately after being used to generate a prompt completion.
  • What types of hacking attacks are there for LLMs specifically?
    Two main types of attacks are do-anything-now attacks and prompt injection. DAN style attacks attempt to convince the LLM to do things it should not by claiming to be a developer or another ruse. Prompt injections attempt to put the LLM into a “debug” mode or something similar where the user can get around safety and compliance rules. The AI Risk software attempts to detect and block these before they get to the LLM.
  • How many users can use the AI Risk platform simultaneously?
    As we have deployed the AI Risk platform to Azure, we can handle thousands, if not a million, simultaneous users.
  • Can we get a deployment of the AI Risk platform in our own cloud instance?
    Yes, we can deploy a version of the AI Risk platform in your own cloud instance. This is very easy for Azure, as that is what we use, but also pretty simple for us to use a different cloud provider. There is a modest setup fee for this feature.
  • Can we get a deployment of the AI Risk platform in our own private cloud?
    Yes, we can deploy a version of the AI Risk platform in your own provatecloud. This is more complex than a standard cloud provider, as we will need to engage with your technology team around items such as your firewall and other cybersecurity software. There is a setup fee for this feature.
  • Is there a free trial of the software?
    We will typically grant a free trial for a week for up to ten users. If instead you decide you want a larger trial, we offer a one-month money-back guarantee if you decide you do not like it and cancel service.
  • Is there a discount for a longer contract, like a year?
    Yes, we will offer discounts for longer contracts. Please contact us for more information.
  • I am a small company, why should I buy this software?
    As well as the benefits to all companies from governance, risk, compliance, and cybersecurity management, using the AI Risk, Inc. platform at smaller companies will allow each of your employees to use generative AI to help them in their job. Salespeople can write emails faster. Marketing people can develop campaigns better and faster. Research people can upload documents and summarize them or create draft research reports. Employees in general can ask questions and get the answers immediately from your uploaded employee manual. Not only will your team be more productive, this system will prevent key confidential information from being leaked, accidentally or intentionally. It can block confidential, personal, and secret information (including items from programming code like secret keys used by Google, Open AI, etc.) The amount of efficiency you will gain at your company by safely adopting the AI Risk platform will really be worth it. And, we can set you up in one day! It is also relatively inexpensive as the Founders would like everyone to have safe access to this technology.
  • I am a manager at a large company, why should I buy this software?
    People are using generative AI whether you know it or not. It is better to have that process controlled and monitored for safety and efficacy by the company. In addition, as the AI Risk platform is available at a low cost, it can save your company huge amounts of money versus building it yourself or cobbling together four different software packages to do what the AI Risk platform can do on its own. Of course, you get governance, risk, compliance, and cybersecurity management with this software. Cybersecurity for LLMs is a new area and having an additional layer of cybersecurity is like locking your doors and then turning on the alarm. We offer 24/7 monitoring of LLM activity by your company.
bottom of page