Not everything needs an LLM: A framework for evaluating when AI makes sense

by SkillAiNest



Question: Which products should use machine learning (ML)?
Product manager answer: Yes.

Jokes aside, the arrival of generative AI has broadened our understanding of which use cases lend themselves best to ML. Historically, we have leveraged ML for predictions and recommendations in consumer experiences, but now it is possible to leverage a form of ML without a full-blown training dataset.

However, the answer to the question "Does this customer need require an AI solution?" is still not always "yes." Large language models (LLMs) can still be prohibitively expensive for some, and, as with all ML models, LLMs are not always accurate. There will always be use cases where an ML implementation is not the right way to go. As AI product managers, how do we evaluate our customers' needs for an AI implementation?

Key considerations to help make this decision include:

  1. Inputs and outputs required to fulfill your customer’s needs: An input is provided by the customer to your product, and an output is provided by your product. So, for an ML-generated Spotify playlist (an output), the inputs could include customer preferences and ‘liked’ songs, artists and music genres.
  2. Combinations of inputs and outputs: Customer needs can vary based on whether they want the same or a different output for the same or a different input. The more combinations of inputs and outputs we need to replicate at scale, the more we should lean toward ML over rule-based systems.
  3. Patterns in inputs and outputs: Patterns in the combinations of inputs and outputs help you decide what type of ML model to use. If there are patterns in the input-output combinations (such as reviewing customer anecdotes to derive a sentiment score), consider supervised or semi-supervised ML models over LLMs, as they can be more cost-effective (see the sketch after this list).
  4. Cost and precision: LLM calls are not always cheap at scale, and the outputs are not always precise/exact despite fine-tuning and prompt engineering. Sometimes you are better off with supervised models or neural networks that classify an input using a fixed set of labels, or even rule-based systems, rather than an LLM.
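To make considerations 3 and 4 concrete, here is a minimal sketch (my own, not from the original article) of the kind of lightweight supervised model described above: a TF-IDF plus logistic-regression sentiment classifier built with scikit-learn. The tiny inline dataset and the label names are purely illustrative; in practice you would train on your own labeled customer anecdotes.

```python
# Minimal sketch: a supervised sentiment classifier as a cheaper alternative
# to calling an LLM for a patterned, repetitive task (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled customer anecdotes (stand-in for a real training set).
texts = [
    "Loved the new playlist feature, it nails my taste",
    "The app keeps crashing when I open my library",
    "Checkout was quick and painless",
    "Support never responded to my refund request",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features + logistic regression: a fixed set of labels, cheap to run at scale.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify a new anecdote without any LLM call.
print(model.predict(["The recommendations feel stale lately"])[0])
```

Once such a model is trained, each prediction is a cheap local inference with a fixed label set, which is the cost and precision trade-off consideration 4 points to.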

I put together a quick table below, incorporating the considerations above, to help product managers evaluate their customers’ needs and determine whether an ML implementation is the right path forward.

| Customer need | Example | ML implementation (yes/no/depends) | Type of ML implementation |
| --- | --- | --- | --- |
| Repetitive tasks where a customer needs the same output for the same input | Auto-filling my email address into different online forms | No | A rule-based system is more than enough to achieve your outcomes |
| Repetitive tasks where a customer needs a different output for the same input | The customer is in “discovery mode” and expects a novel experience when they take the same action (such as signing into an account): generating a new piece of artwork per click; (remember StumbleUpon?) discovering a new corner of the internet through random search | Yes | Image-generation LLMs; recommendation algorithms (collaborative filtering) |
| Repetitive tasks where a customer needs the same/similar output for different inputs | Grading essays; summarizing customer feedback | Depends | If the number of input-output combinations is fairly simple, a precise rule-based system can still work for you. However, if the combinations multiply to the point where a rule-based system cannot scale effectively, consider a classifier or topic modeling, but only if there are patterns in these inputs. If there are no patterns, consider leveraging an LLM, but only for one-off scenarios (LLMs are not as precise as supervised models). |
| Repetitive tasks where a customer needs different outputs for different inputs | Answering customer support questions; search | Yes | It is rare to find examples where you can provide different outputs for different inputs at scale without ML; there are too many permutations for a rule-based implementation to scale effectively. Consider LLMs with retrieval-augmented generation (RAG), or decision trees for products like search. |
| Non-repetitive tasks with a different output | Writing a hotel/restaurant review | Yes | Pre-LLMs, this type of scenario was hard to fulfill without models trained for specific tasks, such as recurrent neural networks (RNNs) or long short-term memory networks (LSTMs) for next-word prediction. LLMs are a great fit for this type of scenario. |
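As a companion to the matrix, here is a small, hypothetical helper (my own sketch, not from the article) that encodes the rows above as a lookup: given whether the task is repetitive, whether the inputs and outputs vary, and whether the inputs show patterns, it returns the suggested implementation path. The function name and return strings are illustrative only.

```python
# Hypothetical sketch: the evaluation matrix above expressed as a simple lookup.
def suggest_implementation(repetitive: bool, same_input: bool,
                           same_output: bool, patterned_inputs: bool) -> str:
    """Map a customer need onto the matrix's suggested implementation path."""
    if not repetitive:
        # Non-repetitive tasks with different outputs (e.g. writing a review).
        return "Yes: LLMs are a great fit"
    if same_input and same_output:
        # Same input, same output (e.g. auto-filling an email address).
        return "No: a rule-based system is enough"
    if same_input and not same_output:
        # Same input, different output (discovery mode).
        return "Yes: image-generation LLMs or recommendation algorithms"
    if not same_input and same_output:
        # Different inputs, same/similar output (e.g. summarizing feedback).
        if patterned_inputs:
            return "Depends: classifier or topic modeling; rules if combinations stay simple"
        return "Depends: an LLM, but only for one-off scenarios"
    # Different inputs, different outputs (support questions, search).
    return "Yes: LLMs with RAG, or decision trees for search-like products"


if __name__ == "__main__":
    print(suggest_implementation(repetitive=True, same_input=False,
                                 same_output=True, patterned_inputs=True))
```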

The bottom line: Don’t use a lightsaber when a simple pair of scissors will do the trick. Evaluate your customer’s need using the matrix above, taking into account the cost of implementation and the precision of the output, to build accurate, cost-effective products.

Sharanya Rao is a fintech group product manager. The views expressed in this article are the author’s and not necessarily those of her company or organization.
