Core Features

06/01/2023

Batch Inference

Performing inference in batches can provide a number of benefits. First, it can improve the efficiency and speed of the inference process. When performing inference on a large dataset, it is computationally wasteful to process each piece of data individually. By grouping the data into batches and performing inference on the entire batch at once, you can take advantage of parallel processing and other optimization techniques to speed up the process. This helps you process data more quickly and efficiently, which is particularly important in high-throughput applications where large volumes of text must be processed within a fixed time window.

Second, performing inference in batches can help you reduce the overall computational cost of the inference process. When performing inference on large datasets, the cost of computation can quickly add up, especially if you are using expensive hardware or cloud-based services. By performing inference in batches, you reduce the number of separate calls that need to be made, amortising fixed overheads such as network round trips and model set-up across many items. This can help you save on computational resources and lower your overall cost.

Third, performing inference in batches can improve the consistency of the inference process. When working with complex models, it can be difficult to ensure that the model is applied consistently to each piece of data. By performing inference in batches, you can ensure that the same model version and the same pre-processing are applied to every item in the batch, which helps you achieve more consistent and reproducible results. This can be particularly important in applications where the quality of the inference results is critical.
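
To make this concrete, here is a minimal client-side batching sketch in Python. The endpoint URL, request payload and response shape are illustrative assumptions for the example, not the documented Fluent One API.

import requests

ENDPOINT = "https://api.example.com/classify"  # hypothetical batch endpoint
BATCH_SIZE = 32  # tune to your model and hardware

def classify_in_batches(texts):
    results = []
    for start in range(0, len(texts), BATCH_SIZE):
        batch = texts[start:start + BATCH_SIZE]
        # One request per batch amortises network and model overhead
        # across many texts, instead of one round trip per text.
        resp = requests.post(ENDPOINT, json={"texts": batch}, timeout=30)
        resp.raise_for_status()
        results.extend(resp.json()["predictions"])  # assumed response field
    return results

The same pattern applies server-side, where the model can run a whole batch through the hardware in a single forward pass.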

06/01/2023

Analytics

There are several reasons why you might want to track analytics from operational predictions made by a text classification model. First, tracking analytics can provide valuable insights into the performance of the model. This can include metrics such as the accuracy of the model’s predictions, the speed at which it processes data, and the number of predictions it makes over a given period of time. These metrics can help you understand how well the model is performing and identify areas for improvement.

Second, tracking analytics can also help you monitor the impact of the model on your business or organization. For example, if the model is being used to classify customer feedback, tracking analytics can help you understand how the model’s predictions are affecting customer satisfaction and loyalty. This can help you make more informed decisions about how to use the model and optimize its performance.

Third, tracking analytics can also provide valuable data for testing and evaluating the model. For example, you could use analytics data to compare the performance of different versions of the model or to evaluate the effect of different parameters on its accuracy. This can help you make more informed decisions about how to develop and improve the model over time.
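
As an illustration, the sketch below logs one record per prediction so that accuracy, throughput and latency can be aggregated later. The record fields and the JSON-lines log file are assumptions made for the example; in production you would more likely write to a metrics store or data warehouse.

import json
import time
from datetime import datetime, timezone

def classify_with_analytics(classify_fn, text, log_file="predictions.jsonl"):
    start = time.perf_counter()
    label, confidence = classify_fn(text)  # placeholder for your model call
    latency_ms = (time.perf_counter() - start) * 1000
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "label": label,
        "confidence": confidence,
        "latency_ms": round(latency_ms, 2),
    }
    # Append-only JSON lines are simple to aggregate into the metrics
    # described above: accuracy (once ground truth arrives), volume and speed.
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return label, confidence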

06/01/2023

Metadata

The Fluent One platform allows for the easy association of metadata with individual labels in any classification model.

The ability to associate metadata with the labels in a text classification model can provide a number of benefits. First, it can make it easier to understand the meaning and context of the labels. For example, if the model is being used for sentiment analysis, the metadata could provide information about the specific emotions or sentiments associated with each label, such as “positive”, “negative”, or “neutral”. This can help users interpret the model’s predictions and understand the reasons behind them.

Second, associating metadata with the labels can also make the model more flexible and adaptable. For example, if the model is trained on a dataset that uses a certain set of labels, but the user wants to apply the model to a different dataset that uses different labels, the metadata can provide a mapping between the two sets of labels. This allows the model to be used in a wider range of applications and contexts without needing to be retrained from scratch.

Third, metadata can also provide useful information for debugging and improving the performance of the model. For example, if the model is not achieving the desired accuracy, the metadata can provide insights into the specific errors that the model is making and suggest ways to address them. This can help users fine-tune the model and improve its performance over time.
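
As a simple illustration of the idea, the sketch below attaches metadata to each label and enriches a prediction with it. The label names and metadata fields are hypothetical; on the Fluent One platform this association is managed for you.

# Hypothetical labels and metadata for a customer-service intent model.
LABEL_METADATA = {
    "billing_query": {"sentiment": "neutral", "queue": "billing"},
    "complaint":     {"sentiment": "negative", "queue": "escalations"},
    "thanks":        {"sentiment": "positive", "queue": None},
}

def enrich_prediction(label):
    # Return the predicted label together with its associated metadata,
    # so downstream systems can route and interpret it consistently.
    return {"label": label, **LABEL_METADATA.get(label, {})}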

Case Study Application

Working with a large UK utility, Utterworks provided an intent model to recognise customer intent when contacting the utility through any customer service channel. We created metadata for each intent, grouping intents and mapping them to a “destination” queue that differed by channel. We also associated a priority for use in asynchronous channels (WhatsApp, SMS, email), based on a customer’s propensity to call if a message was not answered quickly (this had been mapped out by intent using insight generated from unstructured historic data). Further, Utterworks helped the utility use predicted intent to enhance search results on their website, using metadata to provide deep-link URLs to self-service journeys and improve customer experience.

06/01/2023

Group APIs

The group feature allows multiple NL APIs to be grouped together and perform inference in parallel on the same input text. For example, the text could be simultaneously classified by multiple text classification models and have key entities recognised by one or more entity recognition token classifiers.
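
As an illustration, a grouped call might look like the sketch below: a single request carries the text, and the response carries one result per model in the group. The endpoint, group name and field names are assumptions made for the example, not the documented Fluent One API.

import requests

resp = requests.post(
    "https://api.example.com/groups/customer-service/infer",  # hypothetical
    json={"text": "My bill is wrong and I want to complain"},
    timeout=30,
)
result = resp.json()
# e.g. {"intent": {...}, "dissatisfaction": {...}, "entities": [...]}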

You might want to group inference calls together and run them in parallel on the same piece of text for several reasons. First, running inference calls in parallel can improve the speed and efficiency of the inference process. By splitting the inference workload across multiple parallel processes, you can take advantage of multiple CPU cores or other hardware resources to perform the computations more quickly. This can be particularly useful when working with large or complex models, or when performing inference on a large dataset.

Second, grouping inference calls can also improve the consistency of the inference process. When working with multiple models or inference algorithms, it can be challenging to ensure that they are applied consistently to each piece of data. By grouping the calls and running them in parallel, you guarantee that every model or algorithm receives exactly the same input text, which helps you achieve consistent and directly comparable results.

Third, running inference calls in parallel can also provide more flexibility and adaptability for your inference pipeline. By grouping the inference calls together, you can easily swap out different models or algorithms without having to rewrite the entire pipeline. This can make it easier to experiment with different approaches and find the best solution for your specific use case.
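
If you are orchestrating the calls yourself rather than using a group endpoint, the same effect can be sketched with a thread pool, as below. The model callables are placeholders for whatever inference calls your pipeline makes.

from concurrent.futures import ThreadPoolExecutor

def run_group(text, models):
    # `models` maps a name to a callable that performs inference on text.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(fn, text) for name, fn in models.items()}
        return {name: f.result() for name, f in futures.items()}

# Example: results = run_group(text, {"intent": intent_model,
#                                     "dissatisfaction": dissat_model,
#                                     "entities": entity_model})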

Case Study Application

Working with a large utility, Utterworks developed several text classification and entity recognition models for use across the utility’s customer service channels. Initially we created an intent model to recognise the reason for a customer’s contact, and a model to recognise a customer’s expression of dissatisfaction. Calling both models in parallel allowed for efficient intent prediction and the opportunity to prioritise contacts where the customer had a complaint. We subsequently added further models to be called simultaneously: one recognising customer vulnerability and another recognising references to the recently introduced government Energy Bills Support Scheme (this model was introduced in one day and improved the accuracy with which customer enquiries were deflected to specific content created to answer their questions).
