utterworks

jeremy.orr

06/01/2023

Text Summarisation

The same pre-trained deep learning model architecture used for text classification and entity recognition also powers our text summarisation feature, tuned specifically for summarising call and chat transcripts. Text summarisation is the process of automatically generating a shorter version of a piece of text that retains its most important information. For call or chat transcripts, this means identifying the key points and topics discussed in the conversation and generating a summary that captures its essence. The pre-trained summarisation model identifies the most important information in the text and uses it to generate a concise summary, making it quick and easy to understand the content of long or complex conversations.
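
To make the idea concrete (this is not the pre-trained deep learning model described above, just a toy illustration of extractive summarisation), a minimal sketch scores each sentence by the frequency of its words and keeps the top-scoring ones; all names here are illustrative:

```python
import re
from collections import Counter

def summarise(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summariser: score sentences by word frequency
    and return the highest-scoring ones in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the total corpus frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:max_sentences]
    # Restore document order so the summary reads naturally.
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

transcript = ("Customer called about a delayed parcel. "
              "The parcel was dispatched last Monday. "
              "Advisor confirmed the parcel will arrive tomorrow. "
              "Customer was satisfied with the resolution.")
print(summarise(transcript))
```

A real transcript summariser would use an abstractive model rather than sentence extraction, but the input/output shape — long conversation in, short summary out — is the same.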

There are numerous benefits that a customer service organization can derive from being able to summarize contact transcripts using natural language processing (NLP) techniques including:

  1. Improved efficiency: Summarizing contact transcripts can help customer service organizations process and analyze large volumes of text data more efficiently, as it allows them to quickly extract the key points and main themes from the transcripts.
  2. Enhanced customer understanding: Summarizing contact transcripts can help customer service organizations better understand the needs and concerns of their customers, as it allows them to identify patterns and trends in the feedback and inquiries they receive.
  3. Improved decision-making: By summarizing contact transcripts, customer service organizations can gain insights into the types of issues and problems that are most commonly reported by their customers, which can inform decision-making around resource allocation and product development.
  4. Enhanced customer experience: By providing customers with concise summaries of their interactions with customer service, organizations can improve the overall customer experience and demonstrate their commitment to meeting customer needs.
  5. Reduced cost: Summarizing contact transcripts can help customer service organizations reduce the time and resources required to process and analyze large volumes of text data, which can lead to cost savings. This could include reductions in average handle time (AHT), as advisors are no longer asked to spend time creating a contact summary as part of a “call wrap”.

06/01/2023

PII Anonymisation

Natural language processing (NLP) models can be used to very effectively anonymize personally identifiable information (PII) in order to protect the privacy of individuals and comply with privacy regulations. Anonymization is the process of removing or obscuring PII from a text or other data source in such a way that the individual can no longer be identified.

There are several ways in which NLP models can be used for anonymization, including:

  1. Named entity recognition (NER): NER models can be used to identify and extract named entities (such as names, addresses, and phone numbers) from a text, and then replace these entities with placeholders or pseudonyms to anonymize the text.
  2. Redaction: NLP models can be used to identify sensitive information in a text and redact it (i.e., obscure it or remove it entirely) in order to protect the privacy of individuals.
  3. De-identification: NLP models can be used to de-identify texts by replacing or removing specific pieces of information (such as names, addresses, and phone numbers) that could be used to identify an individual.
  4. Pseudonymization: NLP models can be used to create pseudonyms for individuals by replacing their names with unique identifiers that cannot be traced back to the individual.

Overall, the use of NLP models for anonymization can help organizations protect the privacy of their customers or clients and comply with privacy regulations.

Fluent One uses pre-trained deep learning models to offer the ability to anonymise personally identifiable information (PII) in a piece of text. This feature uses the same pre-trained deep learning model used for entity recognition, but trained specifically to identify and anonymise PII in the input text. The model is able to identify PII such as names, addresses, phone numbers, email addresses, and customer reference numbers, and can apply a number of different anonymisation techniques to protect the individual’s privacy whilst keeping the anonymised text meaningful. The techniques include:

  • Replacing with a fixed value
  • Replacing with a fake value
  • Masking
  • Removing 
  • Hashing
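
A minimal sketch of how these techniques might be applied to an already-detected PII value (the entity-detection step is out of scope here, and the function and parameter names are illustrative assumptions, not the Fluent One API):

```python
import hashlib

def anonymise(value: str, technique: str, salt: str = "s3cret") -> str:
    """Apply one of the anonymisation techniques listed above to a
    single detected PII value."""
    if technique == "fixed":
        return "[REDACTED]"                       # replace with a fixed value
    if technique == "fake":
        return "Jane Doe"                         # replace with a fake value (static here)
    if technique == "mask":
        return value[0] + "*" * (len(value) - 1)  # keep first character, mask the rest
    if technique == "remove":
        return ""                                 # remove entirely
    if technique == "hash":
        # Salted hash: irreversible, but stable, so the same value always
        # maps to the same token (useful when joining anonymised records).
        return hashlib.sha256((salt + value).encode()).hexdigest()[:12]
    raise ValueError(f"unknown technique: {technique}")

print(anonymise("John Smith", "mask"))  # J*********
```

Note that only a replacement kept in a secure lookup table (not shown) would support reversing the anonymisation, as described below; hashing and removal are one-way by design.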

The Fluent One PII feature also provides a service to reverse the anonymisation process, which is ideal when calling a less secure third-party cloud service from a secure internal one: anonymise the text before passing it to the cloud service, then reverse the anonymisation to continue processing in the secure environment.

06/01/2023

Entity Recognition

Entity recognition, also known as entity extraction or named entity recognition (NER), is a natural language processing (NLP) task that involves identifying and classifying named entities in a text into predefined categories such as person names, organizations, locations, and so on.

Entity recognition is useful for a variety of applications, including information extraction, text summarization, and question answering. Some specific examples of how entity recognition models can be used include:

  1. Information extraction: Entity recognition models can be used to extract structured information from unstructured text, such as extracting the names of people and organizations mentioned in a news article.
  2. Text summarization: Entity recognition models can be used to identify the most important entities mentioned in a text and use them to generate a summary of the text.
  3. Question answering: Entity recognition models can be used to identify the entities mentioned in a question and use them to retrieve relevant information from a database or other source.
  4. Chatbots: Entity recognition models can be used to understand the entities mentioned in a user’s input and use them to generate an appropriate response.
  5. Customer service: Entity recognition models can be used to identify the entities mentioned in customer inquiries and use them to route the inquiries to the appropriate customer service representative or to automatically generate a response.
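
To show what the output of such a model looks like, here is a toy rule-based recogniser; production NER uses trained models as described above, but the shape of the result — entity text, label, and character span — is the same idea. The patterns, labels, and names here are all illustrative:

```python
import re

# Toy patterns standing in for a trained NER model's entity types.
PATTERNS = {
    "EMAIL": r"\b[\w.+-]+@[\w-]+\.\w+\b",
    "PHONE": r"(?:\+44|\b0)\d{9,10}\b",
    "ORDER_REF": r"\bORD-\d{6}\b",
}

def extract_entities(text: str):
    """Return detected entities with their label and character span."""
    entities = []
    for label, pattern in PATTERNS.items():
        for m in re.finditer(pattern, text):
            entities.append({"text": m.group(), "label": label,
                             "start": m.start(), "end": m.end()})
    return sorted(entities, key=lambda e: e["start"])

msg = "Hi, my order ORD-123456 hasn't arrived. Email me at sam@example.com."
for ent in extract_entities(msg):
    print(ent["label"], ent["text"])
```

In the customer-service use case above, the extracted `ORDER_REF` could be used to look up the order and route or auto-answer the inquiry.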

06/01/2023

Text Classification

Text classification is a common task in natural language processing (NLP) where the goal is to assign a text document or a sequence of words to one or more predefined categories based on its content. Text classification can be used for a wide range of applications, including sentiment analysis, spam detection, topic classification, and language identification.

One of the main benefits of text classification is that it can help automate the process of sorting and organizing large volumes of text data. For example, a text classification model could be used to automatically classify emails as spam or not spam, or to categorize customer reviews by sentiment (positive, negative, or neutral).

Text classification models can also be useful for identifying patterns and trends in text data. For example, a text classification model could be used to analyze social media posts to identify common themes or to identify the topics that are most frequently discussed in a particular community.

Overall, text classification models can help organizations more efficiently process and understand large amounts of text data, and can be used to support a variety of business and research objectives.

Training a text classification NLP model starts with a large dataset of labeled text, organized into distinct classes with each piece of text belonging to a single class. A machine learning model is then trained on this dataset so that it can take in new pieces of text and predict which class they belong to. This can be done using a variety of techniques, such as support vector machines, decision trees, or deep learning neural networks, and the trained model can then be used to classify new, unseen text data.
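
As a concrete (if simplistic) instance of that train-then-predict loop, here is a minimal multinomial Naive Bayes classifier in plain Python. It stands in for the SVMs, decision trees, or neural networks a real system would use; the class and data below are illustrative:

```python
import math
import re
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # class -> word frequencies
        self.class_counts = Counter()            # class -> document count
        self.vocab = set()

    @staticmethod
    def _tokenise(text):
        return re.findall(r"[a-z']+", text.lower())

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            tokens = self._tokenise(text)
            self.class_counts[label] += 1
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)

    def predict(self, text):
        tokens = self._tokenise(text)
        total_docs = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, doc_count in self.class_counts.items():
            # log prior + log likelihood with add-one smoothing
            score = math.log(doc_count / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokens:
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayesTextClassifier()
clf.fit(["win a free prize now", "cheap meds free offer",
         "meeting agenda for monday", "lunch with the team"],
        ["spam", "spam", "ham", "ham"])
print(clf.predict("free prize offer"))  # spam
```

The spam/ham example mirrors the email-filtering use case mentioned earlier; swapping the training data for labelled customer reviews would give the sentiment classifier described above.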

06/01/2023

Batch Inference

Performing inference in batches can provide a number of benefits. First, it can improve the efficiency and speed of the inference process. When performing inference on a large dataset, it can be computationally intensive to process each piece of data individually. By grouping the data into batches and performing inference on the entire batch at once, you can take advantage of parallel processing and other optimization techniques to speed up the process. This can help you process data more quickly and efficiently, which can be particularly important in applications where real-time performance is critical.

Second, performing inference in batches can also help you reduce the overall computational cost of the inference process. When performing inference on large datasets, the cost of computation can quickly add up, especially if you are using expensive hardware or cloud-based services. By performing inference in batches, you can reduce the number of individual computations that need to be performed, which can help you save on computational resources and lower your overall cost.

Third, performing inference in batches can also improve the accuracy and consistency of the inference process. When working with complex models, it can be difficult to ensure that the model is applied consistently to each piece of data. By performing inference in batches, you can ensure that the same model is applied to all of the data in the batch, which can help you achieve more consistent and accurate results. This can be particularly important in applications where the quality of the inference results is critical.
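
The efficiency argument can be illustrated with a sketch in which a fixed per-call overhead is paid once per item versus once per batch. The model call here is a simulated stand-in, not a real inference service, and the batch size is arbitrary:

```python
import time

def predict_one(text: str) -> str:
    """Stand-in for a single model call with fixed invocation overhead."""
    time.sleep(0.001)  # simulated per-call overhead
    return "positive" if "good" in text else "negative"

def predict_batch(texts):
    """Batched stand-in: the same overhead is paid once for the whole batch."""
    time.sleep(0.001)
    return ["positive" if "good" in t else "negative" for t in texts]

def batched(items, batch_size):
    """Yield successive fixed-size chunks of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

docs = ["good service"] * 50 + ["bad service"] * 50

# One at a time: 100 overhead payments.
start = time.perf_counter()
singles = [predict_one(d) for d in docs]
t_single = time.perf_counter() - start

# Batched: 100 / 25 = 4 overhead payments.
start = time.perf_counter()
results = [r for batch in batched(docs, 25) for r in predict_batch(batch)]
t_batch = time.perf_counter() - start

assert results == singles  # same predictions, much less overhead
print(f"single: {t_single:.3f}s  batched: {t_batch:.3f}s")
```

On real hardware the gap comes from GPU parallelism and amortised data transfer rather than a sleep, but the shape of the saving is the same.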

06/01/2023

Analytics

There are several reasons why you might want to track analytics from operational predictions made by a text classification model. First, tracking analytics can provide valuable insights into the performance of the model. This can include metrics such as the accuracy of the model’s predictions, the speed at which it processes data, and the number of predictions it makes over a given period of time. These metrics can help you understand how well the model is performing and identify areas for improvement.

Second, tracking analytics can also help you monitor the impact of the model on your business or organization. For example, if the model is being used to classify customer feedback, tracking analytics can help you understand how the model’s predictions are affecting customer satisfaction and loyalty. This can help you make more informed decisions about how to use the model and optimize its performance.

Third, tracking analytics can also provide valuable data for testing and evaluating the model. For example, you could use analytics data to compare the performance of different versions of the model or to evaluate the effect of different parameters on its accuracy. This can help you make more informed decisions about how to develop and improve the model over time.
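
A minimal sketch of such a tracker, accumulating the metrics discussed above — prediction volume, class distribution, average latency, and accuracy where ground truth later becomes available. The names and metric choices are illustrative, not a specific product API:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PredictionTracker:
    """Accumulates operational metrics for a text classification model."""
    total: int = 0
    correct: int = 0
    labelled: int = 0
    latency_ms: float = 0.0
    by_class: Counter = field(default_factory=Counter)

    def record(self, predicted, latency_ms, actual=None):
        self.total += 1
        self.latency_ms += latency_ms
        self.by_class[predicted] += 1
        if actual is not None:  # ground truth known, e.g. after human review
            self.labelled += 1
            self.correct += actual == predicted

    def report(self):
        return {
            "predictions": self.total,
            "avg_latency_ms": self.latency_ms / self.total if self.total else 0.0,
            "class_distribution": dict(self.by_class),
            "accuracy": self.correct / self.labelled if self.labelled else None,
        }

tracker = PredictionTracker()
tracker.record("billing", 12.5, actual="billing")
tracker.record("billing", 10.0)
tracker.record("delivery", 14.1, actual="complaint")
print(tracker.report())
```

In practice these counters would be emitted to a metrics store rather than held in memory, but the same fields support the model-comparison and A/B-testing uses described above.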


Copyright © 2023 UTTERWORKS LTD Company no: 12186421 Registered in England and Wales · Privacy