Show HN: Cape API – Keep your sensitive data private while using GPT-4 https://ift.tt/XQD0a1L
We've built the Cape API so developers can keep sensitive data private while prompting LLMs like GPT-4 and GPT-3.5 Turbo. With Cape, you can easily de-identify sensitive data before sending it to OpenAI. You can also create embeddings from sensitive text and documents and perform vector searches to improve your prompt context, all while keeping the data confidential. Developers are using Cape with data like financial statements, legal contracts, and internal/proprietary knowledge that would otherwise be too sensitive to process with the ChatGPT API.

You can try CapeChat, our playground for the API, at https://ift.tt/6k2dOoh

The Cape API is self-serve and has a free tier. The main features of the API are:

- De-identification — Redacts sensitive data like PII, PCI, and PHI from your text and documents.
- Re-identification — Reverts de-identified data back to its original form.
- Upload documents — Converts sensitive documents to embeddings (supports PDF, Excel, Word, CSV, TXT, PowerPoint, and Markdown).
- Vector search — Performs a vector search on your embeddings to augment your prompts with context.

To do all this, we use a number of privacy and security techniques. First, we process data within a secure enclave, which is an isolated VM with in-memory encryption. The data remains confidential: no human, including our team at Cape or the underlying cloud provider, can see it. Second, within the secure enclave, Cape de-identifies your data by removing PII, PCI, and PHI before it is processed by OpenAI. As GPT-4 generates and streams back the response tokens, we re-identify the data so it becomes readable again. In addition to de-identification, Cape also has API endpoints for embeddings, vector search, and document uploads, which all operate entirely within the secure enclave (no external calls and no sub-processors).

Why did we build this? Developers asked us for help!
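To make the de-identify/re-identify roundtrip concrete, here is a minimal, self-contained sketch of the pattern in Python. It is not Cape's actual implementation or API; the regex, token format, and function names are illustrative, and a toy rule only redacts email addresses, where a real service detects many PII/PCI/PHI categories:

```python
import re

def deidentify(text):
    """Replace email addresses with placeholder tokens, keeping a
    mapping so the LLM response can be re-identified later.
    (Toy pattern for illustration only; not Cape's detector.)"""
    mapping = {}
    counter = 0

    def repl(match):
        nonlocal counter
        token = f"<EMAIL_{counter}>"
        mapping[token] = match.group(0)
        counter += 1
        return token

    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", repl, text)
    return redacted, mapping

def reidentify(text, mapping):
    """Restore the original values wherever their tokens appear."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt = "Summarize the complaint from alice@example.com."
redacted, mapping = deidentify(prompt)
# redacted == "Summarize the complaint from <EMAIL_0>."
# ...send `redacted` to the LLM; the model only ever sees the token...
response = "The complaint from <EMAIL_0> concerns billing."
print(reidentify(response, mapping))
# prints "The complaint from alice@example.com concerns billing."
```

The key property is that the sensitive value never leaves the trusted boundary: the model operates on opaque tokens, and the mapping needed to reverse them stays local (in Cape's case, inside the secure enclave).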
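The embeddings/vector-search flow described above can also be sketched in a few lines. This is a deliberately simplified, self-contained illustration: it uses a toy bag-of-words embedding and cosine similarity in place of a real embedding model and vector store, and none of the names here come from Cape's API:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical sensitive documents, embedded once at upload time.
documents = [
    "Q2 revenue grew 14 percent driven by subscription renewals.",
    "The lease contract expires in March and auto-renews yearly.",
]
index = [(doc, embed(doc)) for doc in documents]

def search(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "When does the contract expire?"
context = search(question)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
```

The retrieved passage is prepended to the prompt as context, which is the standard retrieval-augmented pattern for grounding an LLM in private documents; in Cape's design, the embedding and search steps happen inside the secure enclave so the raw documents never leave it.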
We've been working at the intersection of privacy and AI since 2017, and with the explosion of interest in LLMs we've had a lot of questions from developers. Privacy and security remain among the biggest barriers to adopting AI like LLMs, particularly for sensitive data. We've spoken with many companies that have been experimenting with ChatGPT or the GPT-4 API. They are extremely excited about the potential, but they find that taking an LLM-powered feature from PoC to production is a major lift, and it's uncharted territory for many teams. Developers have questions like:

- How do we ensure the privacy of our customers' data if we're sending it to OpenAI?
- How can we securely feed large bodies of internal, proprietary data into GPT-4?
- How can we mitigate hallucinations and bias so that we have higher trust in AI-generated text?

The features of the Cape API are designed to help solve these problems for developers, and we have a number of early customers using the API in production already.

To get started, check out our docs: https://ift.tt/RzYlEgj
View the API reference: https://ift.tt/QmoKSxU
Join the discussion on our Discord: https://ift.tt/rNnG0oV
And of course, try the CapeChat playground at https://ift.tt/6k2dOoh

https://capeprivacy.com

June 27, 2023 at 02:04PM