Opinions expressed by contributors are their own.
Although AI has been an established discipline in computer science for decades, it became a buzzword in 2022 with the arrival of generative AI. Despite the maturity of AI as a scientific field, large language models remain remarkably ignorant.
Business leaders, especially those without technical backgrounds, are eager to adopt LLMs and generative AI to advance their business efforts. While it is reasonable to take advantage of technological progress to improve business processes, in the case of AI it must be done with care.
Today, many business leaders are driven by hype and external pressure. From startup founders seeking funding to corporate strategists pursuing innovation agendas, the focus is on integrating the latest AI tools as quickly as possible. This race toward integration ignores critical flaws that lie beneath the surface of generative AI systems.
1. Deep algorithmic flaws in large language models and generative AI
Put simply, these models have no real understanding of what they are doing, and even when you try to keep them on track, they often lose the thread.
These systems do not think. They predict. Every phrase an LLM produces is generated by estimating the most probable next token, based on patterns in the data on which it was trained. They do not distinguish truth from error, or logic from noise. Their answers may sound authoritative, yet they can still be completely wrong, especially when operating beyond their familiar training data.
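The mechanism above can be sketched in a few lines. This is a deliberately toy model (the vocabulary and probabilities are invented for illustration): it picks each next word purely from co-occurrence statistics, so it can fluently emit a false continuation like "cat flew" because nothing in the mechanism checks truth.

```python
import random

# Toy next-token table: probabilities stand in for statistics a real model
# would learn from training text. No notion of truth or logic is involved.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "revenue": 0.3, "moon": 0.2},
    "cat": {"sat": 0.7, "flew": 0.3},  # "flew" is fluent but false
}

def generate(prompt_token, steps, rng):
    """Sample a continuation token by token from the probability table."""
    tokens = [prompt_token]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:
            break  # outside familiar data the model has nothing to go on
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(generate("the", 2, random.Random(42)))
```

A real LLM does the same thing at vastly larger scale; the point is that plausibility, not correctness, drives every choice.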
2. Lack of accountability
Incremental software development is a well-documented approach in which developers can trace requirements and retain full control over the current state of the system.
This allows them to identify the root causes of logical bugs and take corrective action while maintaining consistency across the system. LLMs also evolve gradually, but there is no record of what caused a given change, what their previous state was, or what their current state is.
Modern software is built on engineering transparency and traceability. Every function, module and dependency is observable and accountable. When something fails, logs, tests and documentation guide the developer to a resolution. None of this holds for generative AI.
An LLM's model weights are produced by opaque training processes that amount to black-box optimization. No one, not even the developers behind them, can point to the specific training inputs that cause a particular behavior. Debugging them is effectively impossible. It also means these models can degrade or change behavior unexpectedly after retraining cycles, with no audit trail available.
For businesses that depend on precision, predictability and compliance, this lack of accountability should raise red flags. You cannot control an LLM's internal logic. You can only observe it.
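The contrast is easy to make concrete. In the sketch below (the `apply_discount` function and its logging are invented for illustration), every step of a conventional business rule can be logged, tested and audited; an LLM's decision process offers no equivalent log line explaining why one output appeared rather than another.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("billing")

def apply_discount(price, rate):
    """Traceable traditional code: inputs validated, result logged."""
    if not 0 <= rate <= 1:
        # A failure here names its own cause -- nothing is opaque.
        raise ValueError(f"invalid rate: {rate}")
    result = round(price * (1 - rate), 2)
    log.info("apply_discount(%s, %s) -> %s", price, rate, result)
    return result

# By contrast, an LLM's "logic" is billions of opaque weights with no
# audit trail from input to output.
print(apply_discount(100.0, 0.15))
```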
3. Zero-day attacks
Zero-day attacks in traditional software and systems are manageable: developers can patch the risk because they know what they built and can understand the flaw that was exploited.
With LLMs, every day is a zero day, and no one may even be aware of it, because there is no record of the system's internal state.
Security in traditional computing assumes that risks can be detected, diagnosed and patched. The attack vector may be novel, but there is a response framework. Not so with generative AI.
Since most of their logic exists in no inspectable code base, there is no way to identify the root cause of an exploit. All you know is that there is a problem once it surfaces in production. And by then, reputational or regulatory damage may already be done.
Given these critical issues, entrepreneurs should take the following precautionary measures.
1. Use generative AI in sandbox mode only:
The first and most important step is that business owners should use generative AI only in sandbox mode and never integrate it into their business processes.
In practice, this means never wiring an LLM into internal systems through its APIs.
The term "integration" implies trust. You are confident that the component you connect will behave consistently, respect your business logic and not corrupt the system. That level of trust is unwarranted for tools prone to fabrication. Wiring an LLM via APIs directly into databases, operations or communication channels is not merely risky; it is negligent. It creates openings for misinterpretation-driven data leaks, functional errors and unreviewed automated decisions.
Instead, treat LLMs as external, isolated engines. Use them in a sandbox environment where their outputs can be evaluated before any human or system acts on them.
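One way to sketch this isolation: model output lands in a review queue rather than flowing to production. Everything here is hypothetical, including `fake_llm`, which stands in for whatever model call you use; the point is the structural boundary, not any vendor's API.

```python
from dataclasses import dataclass, field

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call (an assumption, not a vendor API)."""
    return f"DRAFT: response to {prompt!r}"

@dataclass
class Sandbox:
    """Holds model output for review; nothing is routed downstream."""
    pending: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        draft = fake_llm(prompt)
        self.pending.append(draft)  # queued for human evaluation
        return draft                # never sent to internal systems here

sb = Sandbox()
sb.ask("summarize the quarterly numbers")
print(sb.pending)
```

The design choice is that the `Sandbox` class simply has no method that touches production systems, so a misbehaving model output can go no further than the queue.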
2. Use human oversight:
Within the sandbox, assign a human supervisor to prompt the machine, review the output and relay it to internal operations. You must break any direct machine-to-machine conversation between LLMs and your internal systems.
Automation looks efficient, until it isn't. When LLM outputs feed other machines or processes directly, you create blind pipelines. There is no one to say, "That doesn't look right." Without human oversight, even a single hallucination can escalate into financial loss, legal trouble or misinformation.
The human-in-the-loop model is not a bottleneck; it is a safeguard.
3. Never feed your business information to generative AI, and do not assume it can solve your business problems.
Treat these tools as dumb and potentially dangerous machines. Use human experts as engineers to define the business architecture and solutions. Then use a prompt engineer to ask the AI narrowly scoped implementation questions, function by function, without revealing the overall purpose.
These tools are not strategic advisers. They do not understand your business domain, the nuances of your goals or the problem space. What they produce is pattern-matched language, not intent-driven solutions.
Business logic grounded in purpose, context and judgment should be defined by humans. Do not use AI to design strategy or own decisions; use it only to support execution. Treat AI as useful in pieces, like a scripting calculator, but never in charge.
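The function-by-function discipline above can be made mechanical. In this sketch, the prompt list, the forbidden terms and the checker are all invented for illustration: each prompt exposes one narrow implementation task, and a simple filter rejects prompts that would leak business context.

```python
# Narrow, implementation-level prompts: each asks for one function,
# revealing nothing about the overall business purpose.
SCOPED_PROMPTS = [
    "Write a Python function that validates an ISO-8601 date string.",
    "Write a SQL query that sums a numeric column grouped by month.",
]

# Terms that would leak strategy or sensitive data (illustrative list).
FORBIDDEN_CONTEXT = ["revenue target", "merger", "customer list"]

def is_safely_scoped(prompt: str) -> bool:
    """Reject prompts that expose sensitive business context."""
    lowered = prompt.lower()
    return not any(term in lowered for term in FORBIDDEN_CONTEXT)

print(all(is_safely_scoped(p) for p in SCOPED_PROMPTS))
```

A keyword filter is only a crude backstop; the real control is the human engineer deciding what each prompt may contain.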
Finally, generative AI is not yet ready for deep integration into business infrastructure. The models are ignorant, their behavior is opaque, and their risks are not well understood. Businesses must resist the hype and adopt a defensive posture. The cost of misuse is not just inefficiency; it can be irreparable.