
06/01/2023

Entity Recognition

Entity recognition, also known as entity extraction or named entity recognition (NER), is a natural language processing (NLP) task that involves identifying and classifying named entities in a text into predefined categories such as person names, organizations, locations, and so on.

Entity recognition is useful for a variety of applications, including information extraction, text summarization, and question answering. Some specific examples of how entity recognition models can be used include:

  1. Information extraction: Entity recognition models can be used to extract structured information from unstructured text, such as extracting the names of people and organizations mentioned in a news article.
  2. Text summarization: Entity recognition models can be used to identify the most important entities mentioned in a text and use them to generate a summary of the text.
  3. Question answering: Entity recognition models can be used to identify the entities mentioned in a question and use them to retrieve relevant information from a database or other source.
  4. Chatbots: Entity recognition models can be used to understand the entities mentioned in a user’s input and use them to generate an appropriate response.
  5. Customer service: Entity recognition models can be used to identify the entities mentioned in customer inquiries and use them to route the inquiries to the appropriate customer service representative or to automatically generate a response.
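
As a concrete illustration of the task itself, here is a minimal sketch of entity extraction using a pre-trained Hugging Face transformers pipeline (the library and the default model are assumptions for illustration; the post does not prescribe a specific toolkit):

```python
from transformers import pipeline

# Load a pre-trained token-classification (NER) pipeline.
# The default model is used here purely for illustration.
ner = pipeline("ner", aggregation_strategy="simple")

text = "Utterworks worked with a large UK utility headquartered in London."

for entity in ner(text):
    # Each result includes the matched text, its predicted category
    # (e.g. ORG, LOC, PER) and a confidence score.
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```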

06/01/2023

Batch Inference

Performing inference in batches can provide a number of benefits. First, it can improve the efficiency and speed of the inference process. When performing inference on a large dataset, it can be computationally intensive to process each piece of data individually. By grouping the data into batches and performing inference on the entire batch at once, you can take advantage of parallel processing and other optimization techniques to speed up the process. This can help you process data more quickly and efficiently, which can be particularly important in applications where real-time performance is critical.

Second, performing inference in batches can also help you reduce the overall computational cost of the inference process. When performing inference on large datasets, the cost of computation can quickly add up, especially if you are using expensive hardware or cloud-based services. By performing inference in batches, you can reduce the number of individual computations that need to be performed, which can help you save on computational resources and lower your overall cost.

Third, performing inference in batches can also improve the accuracy and consistency of the inference process. When working with complex models, it can be difficult to ensure that the model is applied consistently to each piece of data. By performing inference in batches, you can ensure that the same model is applied to all of the data in the batch, which can help you achieve more consistent and accurate results. This can be particularly important in applications where the quality of the inference results is critical.
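
As a rough sketch of what this looks like in practice (PyTorch and Hugging Face transformers are assumed here, and the model name and inputs are placeholders), the key idea is to run one padded forward pass per batch rather than one model call per text:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: any fine-tuned text classification model would do.
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

texts = [f"customer message number {i}" for i in range(1_000)]  # placeholder inputs
batch_size = 32
predictions = []

with torch.no_grad():
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        # A single padded forward pass covers the whole batch,
        # instead of one model call per individual text.
        encoded = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
        logits = model(**encoded).logits
        predictions.extend(logits.argmax(dim=-1).tolist())

print(len(predictions), "predictions")
```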

06/01/2023

Metadata

The Fluent One platform allows for the easy association of metadata with individual labels in any classification model.

The ability to associate metadata with the labels in a text classification model can provide a number of benefits. First, it can make it easier to understand the meaning and context of the labels. For example, if the model is being used for sentiment analysis, the metadata could provide information about the specific emotions or sentiments associated with each label, such as “positive”, “negative”, or “neutral”. This can help users interpret the model’s predictions and understand the reasons behind them.

Second, associating metadata with the labels can also make the model more flexible and adaptable. For example, if the model is trained on a dataset that uses a certain set of labels, but the user wants to apply the model to a different dataset that uses different labels, the metadata can provide a mapping between the two sets of labels. This allows the model to be used in a wider range of applications and contexts without needing to be retrained from scratch.

Third, metadata can also provide useful information for debugging and improving the performance of the model. For example, if the model is not achieving the desired accuracy, the metadata can provide insights into the specific errors that the model is making and suggest ways to address them. This can help users fine-tune the model and improve its performance over time.
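
By way of illustration only (this is a hand-rolled dictionary, not the Fluent One API), label metadata might look something like this, with each label carrying a description and a mapping to an alternative label scheme:

```python
# Illustrative structure only: each classification label carries a description
# and a mapping to a second label scheme used by another system.
label_metadata = {
    "positive": {"description": "Customer expresses satisfaction",    "maps_to": "PROMOTER"},
    "neutral":  {"description": "No clear sentiment either way",      "maps_to": "PASSIVE"},
    "negative": {"description": "Customer expresses dissatisfaction", "maps_to": "DETRACTOR"},
}

def interpret(predicted_label: str) -> dict:
    """Return a prediction together with its metadata, so downstream systems
    can explain the label or translate it into the other scheme."""
    return {"label": predicted_label, **label_metadata[predicted_label]}

print(interpret("negative"))
# -> {'label': 'negative', 'description': 'Customer expresses dissatisfaction', 'maps_to': 'DETRACTOR'}
```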

Case Study Application

Working with a large UK utility, Utterworks provided an intent model to recognise customer intent when contacting the utility through any customer service channel. We created metadata for each intent, grouping intents and mapping them to a “destination” queue that differed by channel. We also attached a priority used in asynchronous channels (WhatsApp, SMS, email), based on a customer’s propensity to call if a message was not answered quickly (this had been mapped out by intent using insight generated from unstructured historic data). Further, Utterworks helped the utility use predicted intent to enhance search results on their website, using metadata to provide deep-link URLs into self-service journeys and improve the customer experience.
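
A simplified sketch of that routing pattern is shown below; the intents, queue names, priorities and URLs are invented for illustration and are not the utility’s actual configuration:

```python
# Invented example data: per-intent metadata holding a destination queue per
# channel, an asynchronous-channel priority, and a self-service deep link.
intent_metadata = {
    "report_gas_leak": {
        "queues": {"voice": "emergency", "whatsapp": "emergency", "email": "emergency"},
        "async_priority": 1,  # customers highly likely to call if not answered quickly
        "self_service_url": "https://example.com/report-a-leak",
    },
    "update_contact_details": {
        "queues": {"voice": "general", "whatsapp": "back_office", "email": "back_office"},
        "async_priority": 5,
        "self_service_url": "https://example.com/my-account/contact-details",
    },
}

def route(predicted_intent: str, channel: str) -> dict:
    """Turn a predicted intent plus channel into routing instructions."""
    meta = intent_metadata[predicted_intent]
    return {
        "queue": meta["queues"][channel],
        "priority": meta["async_priority"],
        "deep_link": meta["self_service_url"],
    }

print(route("report_gas_leak", "whatsapp"))
# -> {'queue': 'emergency', 'priority': 1, 'deep_link': 'https://example.com/report-a-leak'}
```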
