How to Bring Generative AI to Your Enterprise Without Compromising on Security?

Read Time: 8 minutes


ChatGPT has undeniably captured the collective imagination, generating substantial attention and fervent discussion. Yet despite the considerable buzz of recent months, companies have approached its adoption with circumspection, weighing the potential implications and risks (security, GDPR, and more). In this blog, we will examine the factors leading enterprises to restrict the use of ChatGPT, and explore alternative models that capture the enormous potential of Generative AI without compromising on security. 

Consumerization of IT and the Security Risks  

Maintaining a clear boundary between personal and professional life is crucial. While some personal tools may prove useful at work, they were not initially built with the privacy and security requirements of the professional environment in mind. Consider the revolution sparked by the Apple iPhone in 2007. Before its arrival, phones like Nokia and BlackBerry dominated the market; the iPhone triggered a paradigm shift with the emergence of the smartphone, and everyone wanted one. But the integration of personal apps and email on these devices blurred the boundary between personal and professional life, posing risks such as information leakage and reduced control over data. 

Similarly, the advent of ChatGPT can be seen as a remake of the iPhone era. Originally designed for personal use, ChatGPT proved highly productive and appealing for professional tasks as well. However, it was not explicitly developed to meet the demands of the professional environment: it lacks features such as security guarantees, data privacy, accurate legal citations, and up-to-date information.  

While ChatGPT, built on large language models (LLMs), offers much promise, its implementation in today's enterprise should be weighed carefully. Let's look at how companies are reacting to ChatGPT. 

What Are Companies Saying? 


Enthusiasm for ChatGPT skyrocketed shortly after its launch, captivating users within its first week. With astonishing growth, ChatGPT amassed an impressive user base of 100 million individuals in just two months. 

As 2023 unfolds, ChatGPT is projected to generate substantial revenue of $200 million, and its upward trajectory shows no signs of slowing. In a recent pitch to investors, OpenAI anticipated that by the end of 2024 revenue will reach an estimated $1 billion, solidifying its position as a dominant player in the market. 

Yet even as OpenAI predicts strong revenue growth, several major companies have restricted the use of ChatGPT and other generative AI tools by their employees. Large companies such as Apple, JPMorgan Chase, Deutsche Bank, Verizon, and Accenture have banned such tools over enterprise data privacy concerns.  

According to CNET, the Samsung memo said: "HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI." 

CNN recently reported that JPMorgan Chase instructed its employees to refrain from entering sensitive information into OpenAI's freely accessible chatbot, citing compliance concerns with third-party software. The potential risks associated with daily usage of tools like ChatGPT were also highlighted by Nozha Boujemaa, the VP for Digital Ethics at IKEA, as reported by AI Business. These cautionary notes reflect the need for careful consideration and risk assessment when utilizing AI chatbot technologies in sensitive or regulated environments. 

3 Major Concerns for Enterprises 

🛡️ Data privacy and safety: Data safety and compliance with data privacy laws are major concerns when incorporating information into prompts. Camilla Winlo, Head of Data Privacy at Gemserv, draws a parallel to Amazon's Alexa, where humans were employed to check conversations for expected responses, illustrating the need for vigilance in how AI systems handle data. Avast further emphasizes that once a model has learned from user data, deleting that information becomes extremely difficult, if not impossible at present. These warnings underscore the importance of robust privacy measures and data-handling practices in AI technologies. 
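One common mitigation is to place a redaction step between employees and any external model, so sensitive values never leave the company network. The sketch below is a minimal, hypothetical illustration; the `redact` helper and its patterns are our own examples, and a real deployment would rely on a dedicated PII-detection service rather than ad-hoc regexes.

```python
import re

# Illustrative patterns for a few common sensitive fields.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected sensitive values with placeholders before the
    prompt is ever sent to a third-party AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# → Contact [EMAIL] about card [CREDIT_CARD]
```

Even a simple gateway like this reduces the chance that confidential data ends up in a vendor's logs or training pipeline.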

🤥 False information and hallucinations: These are significant concerns with ChatGPT. Many of us have seen instances on social media where ChatGPT produced responses containing false information or nonsensical content; such responses should never be taken as factual or reliable. "Roughly speaking, the hallucination rate for ChatGPT is 15% to 20%," says Peter Relan, co-founder and chairman of Got It AI.  

🔎 No citations, plagiarism, and intellectual property: The model may provide inaccurate information, lack proper citations, and fail to reflect the most up-to-date information. Additionally, the question of who owns content generated by large language models (LLMs) like ChatGPT remains a topic that requires further exploration and clarification. 

ChatGPT Is Already Used in the Enterprise


Fishbowl, a professional networking app, recently conducted a survey revealing intriguing insights into the adoption of AI tools like OpenAI's ChatGPT in the workplace. The survey, which received responses from more than 11,700 employees across renowned companies such as Amazon, Google, IBM, JPMorgan, Meta, and Twitter, sheds light on the prevalence of AI tool usage in work-related tasks.  

According to the survey findings, a significant 43 percent of employees admitted to incorporating AI tools like ChatGPT into their workflow. Notably, among this group of AI users, an astounding 68 percent revealed that they have not yet disclosed their utilization of ChatGPT to their superiors.  

The survey, conducted on the Fishbowl app from January 26 to 30, highlights the growing reliance on AI tools as employees seek innovative ways to streamline their work. As a C-level executive, it is crucial to recognize that allowing ChatGPT use without careful consideration can jeopardize your company.  

ChatGPT undoubtedly offers numerous advantages to employees, supporting them in various daily tasks such as composing emails, crafting LinkedIn posts, and efficiently processing large amounts of information to facilitate informed decision-making. However, it is crucial to recognize that in certain instances, employees may inadvertently share sensitive information that should remain confidential and not be accessible to the AI model. 

Given that ChatGPT is not yet enterprise-ready, it becomes crucial to address privacy and security concerns, because employees keep using these tools even after restrictions. The question arises: how can we harness the productivity-enhancing capabilities of tools like ChatGPT without exposing our companies to significant risk? What choices do we have? 

In short: the Fishbowl survey found that 43% of employees admitted to using tools like ChatGPT, with 68% of this group hiding it from their bosses. These findings indicate that simply banning such tools is not an effective solution, but neither is depriving employees of a more productive workplace. What are our choices, then?  

An Alternative Model to Make Generative AI Enterprise-Ready 

With a strong focus on privacy and security, Konverso takes the best of Generative AI and combines it with our solutions, interfacing with your data, knowledge, and apps.  

Access Secure, GDPR-Compliant Generative AI Models  

As an enterprise, you are very mindful of GDPR, security and ethics. Konverso leverages the best Generative AI models that are designed with compliance, privacy, and security in mind. For instance:  

  • Dedicated Training Data: We exclusively use customer-provided training data to finetune the customer's model, ensuring that it doesn't contribute to the training or improvement of any Generative models. 
  • Rigorous Content Filtering: All data submitted to our service undergoes thorough content filtering and processing. We do not store prompts or completions in the model, nor do we use them to train, retrain, or improve our models. 
  • User-Controlled Data Deletion: Fine-tuned models can be deleted by users at any time and are securely stored within the same region as the resource. Prompt and completions data may be temporarily stored in the same region as the resource for up to 30 days. This encrypted data is only accessible to authorized employees for debugging purposes or investigating patterns of abuse and misuse. 
  • Uncompromising Customer Data Privacy: We do not use customer data to train, retrain, or improve the models. 
  • Proactive Abuse Prevention: Our synchronous content filtering and retention of prompts and completions for up to 30 days enable us to monitor any content or behavior that suggests a violation of our product terms. This vigilance ensures that our service is used responsibly and ethically.
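As a rough illustration of the retention and deletion behavior described in the list above (a sketch of ours, not Konverso's actual implementation), a prompt store enforcing a 30-day window and user-triggered deletion might look like this:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # maximum retention window for prompt data

class PromptStore:
    """Illustrative store: records expire after 30 days, and users
    can delete their data immediately on request."""

    def __init__(self):
        self._records = {}  # record_id -> (timestamp, encrypted_blob)

    def put(self, record_id, blob, now=None):
        self._records[record_id] = (now or datetime.utcnow(), blob)

    def delete(self, record_id):
        # User-controlled deletion: remove the record immediately.
        self._records.pop(record_id, None)

    def purge_expired(self, now=None):
        # Scheduled cleanup: drop anything older than the retention window.
        now = now or datetime.utcnow()
        expired = [rid for rid, (ts, _) in self._records.items()
                   if now - ts > RETENTION]
        for rid in expired:
            del self._records[rid]
        return len(expired)
```

The point of the sketch is that retention and deletion are enforced mechanically by the platform, not left to ad-hoc manual cleanup.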

Our mission is to provide solutions that orchestrate generative AI with enterprise data, knowledge and apps to create unparalleled value for our customers. 

Our solutions include chatbots, virtual assistants, and agent assist capabilities that empower contact center agents to provide superior customer support. We also extend our support to various other personas within the enterprise, seamlessly integrating AI into their daily work. By prioritizing security and delivering practical applications, we strive to make AI an integral and beneficial part of enterprise workflows and processes such as employee experience (EX) and customer experience (CX). 

Konverso provides value by helping enterprises leverage large language models (LLMs) while minimizing the associated risks. Konverso recognizes the importance of privacy and security in enterprise environments and is actively working to bridge the gap between LLMs and enterprise readiness. By implementing robust privacy and security measures, Konverso aims to ensure that enterprises can benefit from LLM technology without compromising their sensitive data or exposing themselves to undue risk. 


To learn more, book a demo now!
