Companies are treating artificial intelligence the way Victorian-era physicians treated leeches: as something to be applied liberally, regardless of the actual problem. Some variation of "We need an AI strategy" echoes through board meetings across the country without anyone asking, "Which specific problem are we trying to solve?" The results are predictably underwhelming.
Yet here we are, with executives demanding AI solutions for problems that don't exist while ignoring the problems it could actually solve.
This is expensive in ways that rarely appear on quarterly reports. Companies pour millions into AI initiatives that produce impressive demos and disappointing results. They are writing checks their data infrastructure can't cash. And almost no one questions the pattern.
Related: How to Avoid Losing Millions on AI
The technology-first trap
The typical corporate AI journey follows a depressing path. First, an executive attends a conference where competitors boast about their AI initiatives. Panic ensues. A mandate comes down: "Implement AI across all departments." Teams scramble to invent use cases that justify the technology that has already been chosen. Consultants arrive with slide decks. Pilots are launched. Demos are built. Press releases are drafted. And a year later, when someone asks about ROI, everyone stares at their shoes.
This backward approach of starting with a solution rather than a problem explains why so many AI projects fail. It's like buying an expensive hammer and then wandering around looking for nails. Sometimes you find them! More often, you discover that your actual problems needed a screwdriver.
The thing is, technology-first strategies make for tremendous headlines but terrible business outcomes. They mistake motion for progress. They prize novelty over utility. And too often, the solutions turn out to be far harder than they look.
The data hypocrisy
There is a curious cognitive dissonance in how organizations think about their data. Ask any technical leader about the quality of their company's data, and watch them grow visibly uncomfortable. Yet the same companies approve AI projects that magically assume pristine, comprehensive datasets exist somewhere in their systems.
Machine learning doesn't just require data. It requires meaningful patterns in good data. An algorithm trained on garbage doesn't become intelligent. It becomes extraordinarily efficient at producing extremely confident garbage.
This disconnect between data reality and AI ambition creates a perpetual cycle of disappointment. Projects begin with enthusiastic projections of what AI could do with theoretical data. They end with engineers explaining why the actual data can't support those projections. Next time will be different, everyone assures themselves. It never is.
Related: No One Wants Another Useless AI Tool – Here's What to Build Instead
The implementation gap
The world's most sophisticated AI solution is worthless if it isn't integrated into actual workflows. Yet companies routinely invest millions in algorithms while allocating approximately seventeen dollars and thirty cents to actually putting them to use.
They build AI solutions that require flawless adoption by employees who were never consulted during development, don't understand the models and haven't been trained on the tools. It's the equivalent of dropping a Formula 1 engine into a family car without upgrading the transmission, then wondering why the vehicle breaks down.
See, technology adoption isn't a technical problem. It's a human one. Humans are notoriously resistant to changing their behavior, especially when the benefits aren't immediately obvious to them. An AI solution that demands significant workflow changes without delivering clear, immediate benefits is doomed. Nobody wants to admit it, but it's true.
Reversing the strategy
What would a reversed AI strategy look like? Start with specific, measurable business problems where current approaches are falling short. Validate those problems through rigorous analysis, not executive intuition. Assess whether they actually require AI or could be solved more simply. Consider the organizational changes any solution would demand. Then, and only then, evaluate which data and technology could address the validated problems.
A better implementation framework
The conventional approach to AI implementation needs to change:
Problem before solution: Identify and validate specific business challenges with measurable impact
Data reality check: Audit current data quality and collection processes before assuming AI feasibility (a minimal sketch of what such an audit might look like follows this list)
Simplicity test: Determine whether the problem could be solved more efficiently with a simpler, non-AI approach
Organizational readiness: Assess whether workflows and teams are prepared to integrate AI solutions
Incremental execution: Start with small-scale pilots focused on narrow, well-defined problems
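To make the data reality check concrete, here is a minimal sketch of what a first-pass audit might look like, assuming a tabular dataset loaded with pandas. The helper name, sample columns and thresholds are illustrative assumptions, not a prescribed standard.

```python
# Minimal, illustrative data-readiness audit (assumes pandas is installed).
# The helper name, thresholds and sample columns are hypothetical examples.
import pandas as pd

def audit_data_readiness(df: pd.DataFrame, max_missing_ratio: float = 0.2) -> dict:
    """Summarize basic quality signals before anyone promises an AI outcome."""
    missing_ratio = df.isna().mean()             # share of missing values per column
    duplicate_rows = int(df.duplicated().sum())  # exact duplicate records
    constant_cols = [c for c in df.columns if df[c].nunique(dropna=True) <= 1]
    mostly_missing = missing_ratio[missing_ratio > max_missing_ratio].index.tolist()
    return {
        "rows": len(df),
        "duplicate_rows": duplicate_rows,
        "columns_mostly_missing": mostly_missing,
        "constant_columns": constant_cols,
        "ready": duplicate_rows == 0 and not mostly_missing and not constant_cols,
    }

if __name__ == "__main__":
    # Hypothetical sample: a few customer records with gaps and a duplicate.
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "region": ["north", None, None, None],
        "churned": [0, 1, 1, 0],
    })
    print(audit_data_readiness(sample))
```

A report like this won't tell you whether an AI project will succeed, but it will tell you quickly whether the pristine, comprehensive dataset the pitch deck assumes actually exists.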
Related: When should you not invest in AI?
Training an algorithm on poor data is like building a house on quicksand. The architecture might be brilliant, but it hardly matters when everything is sinking. Companies proudly announce AI initiatives with roughly the same level of strategic rigor as medieval alchemists announcing plans to turn lead into gold. The main difference is that the alchemists spent less money.
Perhaps the most valuable AI implementation strategy is simply changing the question. Instead of asking, "How can we use AI?" try asking, "Which specific problems are we trying to solve, and is AI even the right approach for any of them?" This reframing doesn't make for impressive conference keynotes. It doesn't generate press coverage or speaking slots. But it produces solutions that actually work, which seems like a reasonable goal for a multimillion-dollar investment.