Quick Machine Learning Wins For Startups
Updated: Oct 30, 2020
Machine learning is one of those things that practically every investor seems to ask about these days. The reason is that there is so much information and code available for free that, if we come up with good strategies, they don't take years to implement. We can finally improve the productivity (and joy) of knowledge workers in the ways J.C.R. Licklider first laid out in his 1960 paper “Man-Computer Symbiosis.”
Some of the tactics and strategies can be built on top of simple machine learning libraries. Some are as simple as Python libraries that can be imported into a web app without much fanfare. Others are much more laborious and/or expensive to develop and compute. All will grow in terms of both complexity and training (and hopefully end user benefit) as our apps mature.
A Rough Look At Implementation Strategies
This article looks at a few machine learning techniques that can be applied pretty universally to tools that take large blobs of text as input. These can be summarized in a few options:
Score text entered into a field upon entry. This provides a score that could be used to sort, look for anomalies, and even color-code feedback (typically in larger varchar fields).
A proof of concept might be to plan one sprint with the goal of getting a score into the database per record. This information doesn’t have to be presented to users yet, but should allow developers to refine the follow-up estimates in order to do so and help product teams conceptualize how to leverage sentiment scoring in various parts of the app.
Implementation should plan for a minimum of two sprints: one to allow users to train the model (for example, provide a list of words the system doesn’t know how to score and have users enter a score), and a second to implement how the screens appear and how we help people make faster decisions based on the score (e.g. color coding on screen, as we’ll review later in this article). Once the initial implementation is complete, we should have a cooling-off period to get a better understanding of what other options we might build and the cost in terms of development, compute time, and training required to go further.
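To make the proof of concept concrete, here is a minimal sketch of scoring text at commit time and storing the score per record. The tiny hand-rolled lexicon is a stand-in for a real library like TextBlob or NLTK's VADER, and the table and column names are illustrative, not from any particular app.

```python
import re
import sqlite3

# Minimal lexicon-based polarity scorer -- a stand-in for a real
# library such as TextBlob or NLTK's VADER. Lexicon values are
# illustrative.
LEXICON = {"great": 1.0, "love": 0.8, "good": 0.5,
           "bad": -0.5, "broken": -0.8, "terrible": -1.0}

def polarity(text):
    """Average the scores of known words; 0.0 when none are known."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Store the score alongside each record at commit time (SQLite here;
# table and column names are hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE feedback (body TEXT, polarity REAL)")
for body in ["The new dashboard is great, love it",
             "Export is broken and terrible"]:
    db.execute("INSERT INTO feedback VALUES (?, ?)", (body, polarity(body)))

for row in db.execute("SELECT body, round(polarity, 2) FROM feedback"):
    print(row)
```

Because the score lives in a plain column, sorting, anomaly hunting, and color coding later become simple SQL queries rather than new machine learning work.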
Predict the appropriate category for records.
Proof of concept could be to vectorize all existing categories/titles of articles/pages/records provided at the time of implementation into a table and run simple scripts to see how accurate predictive categories are.
Implementation might include planning for a sprint to surface category selection in the interface, another to allow users to help train the model, and then a cooling off period to capture performance data and plan what a larger amount of work might look like.
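A proof of concept for predictive categories can stay very small. The sketch below vectorizes existing titles into bags of words and predicts a new record's category from its closest labeled neighbor by cosine similarity; the titles and categories are illustrative, and in practice a library like sklearn would replace the hand-rolled math.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words vector; a stand-in for a real vectorizer like
    # sklearn's TfidfVectorizer.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Existing categorized titles (illustrative data).
labeled = [
    ("Quarterly earnings beat expectations", "finance"),
    ("Stock markets rally on rate cut", "finance"),
    ("New quarterback signed for next season", "sports"),
    ("Team wins championship in overtime", "sports"),
]
vectors = [(vectorize(title), cat) for title, cat in labeled]

def predict_category(title):
    v = vectorize(title)
    return max(vectors, key=lambda pair: cosine(v, pair[0]))[1]

print(predict_category("Markets rally after earnings"))   # finance
```

Running a script like this against a dump of existing records is a cheap way to measure how accurate predictive categories would be before committing to interface work.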
Here, we predict tags for information as it’s being entered. This is similar to efforts in categorization, so it shouldn’t heavily impact the cyclomatic complexity of a given app; however, given the way tags are often implemented, it could be much less performant and require more work on the front end to feel as interactive as many might expect (especially given how well tools like LinkedIn have implemented tagging systems).
Proof of concept could be to again vectorize all tags provided at the time of implementation into a table and run simple scripts to see how accurate predictive tags are and have product teams and/or subject matter experts help train models.
Implementation plans should account for a couple of sprints to surface tagging in the interface and then a cooling off period to capture performance data and user research around how users of the product would like tagging to be improved upon.
Similar Items is a fairly well-documented outcome that predicts which items are similar to others – for example, a record that has similar characteristics to another record, like a “similar posts” option in a blog or similar articles on a news site.
Proof of concept would be slightly more complicated than for the previous options. The PoC would involve using a tool like ELKI (https://elki-project.github.io) to match similar items, so it might be a fork of the web app where ELKI is imported and requirements are determined.
Implementation is based on the complexity of the app and how we choose to surface findings in our interfaces. Many blogging systems like WordPress offer this as a plugin, and that isn’t incredibly complicated from the machine learning side. To get much more accurate and useful results, though, models need a fair amount of refinement.
We’ll start with sentiment analysis: given the logical layout of many a web app, this would provide the highest return on investment (ROI) from development efforts.
Sentiment analysis is the automated analysis of content like text or speech as positive, neutral, or negative. That positive or negative analysis looks at the polarity of the words, and a result of sentiment analysis is often a polarity score. This type of scoring allows organizations to better understand how constituents see them. It might mean analyzing Twitter or Facebook to ascertain how people feel about a given subject, or tagging a sentiment score on blog posts or comments to determine how people feel about a given topic.
Using technology like sentiment analysis we get insight into how people perceive something. This is usually done using the natural language processing part of a body of work we tend to generically call machine learning. Sentiment analysis has become key for many industries to figure out what people think and to then improve the experience with products and address any branding concerns or promote positive branding when possible.
Natural language techniques in computing began all the way back in 1950 with Alan Turing’s article “Computing Machinery and Intelligence.” The philosophies, models, articles, and software grew steadily into a body of work over the course of the next few decades.
The explosion of data in the last twenty years then became impossible to analyze manually. Suddenly there were much broader uses for the techniques already under development and we could isolate and classify content and even score feelings, which is where sentiment analysis comes into play.
How Sentiment Analysis Works
Sentiments are subjective and usually refer to opinions and emotions. This means we’re putting a quantifiable overlay on top of otherwise qualitative data. Still, these aren’t facts. For example, the word “bad” might be considered to have a negative polarity, but “that’s not bad” might be considered a bit more neutral.
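The "not bad" example is worth making concrete. A naive word-by-word lookup scores the phrase as negative; even a crude negation rule that flips the polarity of a word following "not" changes the verdict. The tiny lexicon here is illustrative, a sketch of what real libraries handle much more carefully.

```python
import re

# Illustrative lexicon and negator list.
LEXICON = {"bad": -0.7, "good": 0.6}
NEGATORS = {"not", "never"}

def naive_score(text):
    """Sum word polarities with no regard for context."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def negation_aware_score(text):
    """Flip the polarity of any scored word that follows a negator."""
    tokens = re.findall(r"[a-z']+", text.lower())
    score, negate = 0.0, False
    for t in tokens:
        if t in NEGATORS:
            negate = True
            continue
        value = LEXICON.get(t, 0.0)
        score += -value if negate else value
        negate = False
    return score

print(naive_score("that's not bad"))            # -0.7
print(negation_aware_score("that's not bad"))   # 0.7
```

Real libraries layer many rules like this (intensifiers, sarcasm heuristics, punctuation), which is part of why training and tuning take longer than the initial import.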
Because sentiment analysis has grown as a discipline over the years, different branches have different needs, and so there are a variety of strategies to identify exactly what sentiments are contained in the text being analyzed. Subjectivity/objectivity identification involves classifying sentences into subjective or objective groupings. Feature- or aspect-based sentiment analysis is another branch that isolates which opinions relate to which entity, and so picks up on nuance rather than just producing a score. Neither is simple, but scoring subjectivity/objectivity is much more common.
The uses of sentiment analysis are becoming more ubiquitous. Once considered ground-breaking, these days with frameworks available for popular programming languages and example code spread across a variety of social coding websites, organizations can get started relatively quickly. While the implementation can be quick, removing unnecessary information or training code to interpret information in a way specific to a need can be fairly time consuming. This is usually referred to as training a machine learning model.
The more we train our model, the more significant our findings, or insights, can be. This means going from a generic lexicon of words and their scores to a more specialized analysis. That isn’t strictly required: we can quantify opinions and accept a bell-curve type of approach where the typical response becomes a baseline. This lets us quickly start tracking normal reactions and comparing others against that baseline. But over time, most organizations begin partitioning that information and looking for specialized data to train on.
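The baseline idea sketches out simply: treat the mean polarity score as "normal" and flag records that fall far from it. The scores and the two-standard-deviation threshold below are illustrative.

```python
import statistics

# Illustrative polarity scores for a batch of records.
scores = [0.1, 0.2, 0.0, 0.15, 0.1, -0.9, 0.05, 0.2]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)

def is_anomalous(score, threshold=2.0):
    """Flag scores more than `threshold` standard deviations from the mean."""
    return abs(score - mean) / stdev > threshold

outliers = [s for s in scores if is_anomalous(s)]
print(outliers)   # the -0.9 record stands far from the baseline
```

This kind of check is cheap to run nightly and gives a team something actionable long before a specialized model exists.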
Models can take a long time to become accurate. A well-trained model for a given industry can be worth more than the company that created it – one of the many reasons investors look for machine learning projects to get started early.
A Quick Sentiment Analysis Win
Let’s look at one example of quickly getting started using sentiment analysis and how that use case might expand to include training a model. Let’s say a television company meets with people in focus groups to ascertain their thoughts on new television shows. They record interviews and transcribe the findings. That data is qualitative in nature. We can export the data and then analyze the sentiment of each interaction generically using a simple process like the one I wrote up at https://krypted.com/ux-research/batch-process-sentiment-analysis-for-ux-research-studies/. This is a script available to anyone to use for free.
That company doing the research caters to an industry that uses some pretty specialized words to describe things. As do most companies. In the above project, there is a data.json file. In that file, we can define a blurb of text and then identify whether it’s positive or negative, giving it a polarity score. This impacts the scoring of future interviews by altering the way the algorithm interprets specific bits of text and can be used to then get a score with a higher confidence interval.
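The shape of that training data can be sketched as follows. The JSON schema here is hypothetical (check the project's own data.json for the real format): the idea is simply that labeled domain blurbs teach the scorer words a generic lexicon doesn't know.

```python
import json

# Hypothetical training data -- the real data.json in the linked
# project may use a different schema.
raw = '''[
  {"text": "the pilot tested well with the demo audience", "polarity": 0.8},
  {"text": "the show was shelved after one season", "polarity": -0.6}
]'''

base_lexicon = {"well": 0.4}
for entry in json.loads(raw):
    # Teach the lexicon the blurb's polarity for words it doesn't
    # already know; existing base values are preserved.
    for word in entry["text"].lower().split():
        base_lexicon.setdefault(word, entry["polarity"])

print(base_lexicon["shelved"])   # -0.6, learned from domain data
print(base_lexicon["well"])      # 0.4, base value preserved
```

This is deliberately crude, but it captures the point of the paragraph: domain labels shift how future interviews are scored and raise the confidence of the results.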
Drawbacks of Sentiment Analysis
Machine learning isn’t perfect. Vannevar Bush described what would become computers as devices that supplement humans rather than replace us in his 1945 Atlantic article “As We May Think,” and that article inspired many of the original pioneers in computing. Sentiment analysis is similar; it helps guide us in research and helps us process findings faster, but it does not replace human interpretation. Quantitative and qualitative review of data should always overlay one another. As with most research findings, interpretation is key, and that interpretation is often represented in the model for the machine learning project (e.g. the data.json file in the earlier example).
There will always be subtleties that machines can’t pick up on; still, with the help of machine learning we can analyze far more data. Sentiment analysis can also be dangerous if we don’t train the model. Using a baseline is fine but requires a lot of review to see how accurate the machine’s findings are. The better the training, the less human intervention is required, but the cost rises so steeply with each percentage point of accuracy that it often becomes less expensive to hire larger teams of humans to analyze the data. And poorly developed models can lead us to inaccurate findings, and so to bad decisions.
Another concern with sentiment analysis is that humans create the models used for machine learning. There are a lot of tools available, and each is only as good as the person who wrote it and how closely it matches the use case it’s being applied to. Test multiple tools and models to find one that works well.
Finally, clean analysis requires clean data. Data needs to be consistent, and the humans who put data into databases or forums don’t always have consistency in mind. Errors are always a concern with machine learning, but especially when there’s bad data. Most of the time I’ve spent on machine learning projects has gone into getting data into a consistent and analyzable form. The more structured the data we’re working with, the easier that process, and the quicker we’re able to develop solutions.
But despite these drawbacks, machine learning is likely to be an integral, if not one of the most important parts of any organization as time goes on. The ability to leverage deep knowledge as a means to interpret data faster is where we realize the true potential of many an organization.
Advantages of Using Sentiment Analysis
We can learn a lot when we have a machine help synthesize our data. Using sentiment analysis, we can quickly see how customers feel about granular areas of an organization much, much faster than manually combing through our databases or just reading through a whole lot of posts on the Internet or forums. The more reactions to our products out there, the harder it is to find how people really think about things, much less analyze that data.
Building tiny robots to help will get us further, faster. We can then drill down into very specific questions we need answered by combining sentiment analysis with recommender and/or classifier machine learning projects, and do so without bias (unless we amplify our own bias in our models). Findings and uses are far-ranging, and in some cases the value of the machine learning tools is more than the value of entire companies.
Sentiment Analysis Implementation Strategies
One of the hardest aspects of any machine learning project is getting started. Then a lightbulb goes off, we scrap our initial forays into a subject, and we start over. A little really basic guidance can go a long way.
Most software involves people entering information into a system and then calling it back up at a later time. It seems pretty simple at first: when users give us information, we commit it to a database, then display it in a screen, modal, or dialog later.
These are the two main places where we can process data. Over the years I’ve found that each machine learning technique tends to fit at a different point in how users work with software.
Sentiment can easily be scored as the user gives us data. The project we linked to earlier uses an export of the data, but we probably already perform some sanity checking on data entered into our software before committing it to a record (or a bunch of records). At commit time, we can quickly process the data. This could be a small microservice that calculates the polarity score and adds it to an additional field or column for the record in the database. When we then display the data, we can also display the polarity score (rescaling, say, from a range of -10 to 10 to a range of 0 to 10 or 0 to 100).
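A sketch of that commit-time hook, under the assumption of a raw score in the -10 to 10 range rescaled to 0 to 100 for display; the record fields and `score_fn` hook are hypothetical names for illustration.

```python
def rescale(polarity, lo=-10.0, hi=10.0, out_lo=0.0, out_hi=100.0):
    """Linearly map a polarity from [lo, hi] to [out_lo, out_hi], clamping."""
    clamped = max(lo, min(hi, polarity))
    return (clamped - lo) / (hi - lo) * (out_hi - out_lo) + out_lo

def on_commit(record, score_fn):
    # Score the text and stage the display value for the record's
    # extra column before the database write.
    record["polarity_display"] = rescale(score_fn(record["body"]))
    return record

# A stub scorer stands in for the real sentiment service here.
saved = on_commit({"body": "great release"}, lambda text: 6.0)
print(saved["polarity_display"])   # 80.0
```

Keeping the scoring behind a single function like `on_commit` means the sentiment service can later be swapped or retrained without touching the rest of the write path.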
We also have a lot of options for how to surface our findings. The color of the field can change based on the score. For that matter, the whole screen or modal that displays the record can change as well. Or we could pop up a screen warning that the sentiment is bad, or display an emoji. We can pretty much go anywhere we want.
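The field-color idea reduces to a small mapping. The thresholds and hex values below are illustrative choices, not prescribed ones.

```python
def score_color(score):
    """Map a 0..100 display-scale score to a traffic-light hex color."""
    if score < 35:
        return "#d9534f"   # red: negative sentiment
    if score < 65:
        return "#f0ad4e"   # amber: roughly neutral
    return "#5cb85c"       # green: positive

print(score_color(80))   # #5cb85c
```

Because the function only consumes the stored score, the same mapping can drive a field background, a whole modal, or an emoji picker.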
Another common machine learning technique is looking for similar items.
Recommending Tags and Categories
Another quick win that’s easy to explain is finding items that are similar to other items. This is most commonly done using what is known as k-nearest neighbors, a simple algorithm dating back to the 1950s: strings are converted into vectors, and similar items are found based on the distance between those vectors. As with sentiment analysis, understanding the statistics is not required, as it’s mostly handled by the libraries we import into our projects.
Here, we can import basic libraries in Python to process the data. The nltk (Natural Language Toolkit), pandas, matplotlib, TensorFlow, and sklearn libraries are great for these types of operations. Nltk can be used by itself, as it includes a tagging library, which can be seen in this project I put together to showcase it: https://github.com/krypted/lightweighttagger. And since we’re training a model with sklearn, https://github.com/krypted/lightweightcategorizer shows how to do that and then compute how far apart neighbors are.
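To show the mechanics without any dependencies, here is a hand-rolled k-nearest-neighbors tagger: vote among the k closest labeled titles. This is a stand-in for sklearn's KNeighborsClassifier over proper TF-IDF vectors, and the training data is illustrative.

```python
import math
from collections import Counter

def vec(text):
    # Bag-of-words vector (word counts).
    return Counter(text.lower().split())

def distance(a, b):
    # Euclidean distance between two count vectors.
    keys = set(a) | set(b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

# Illustrative labeled titles.
training = [
    ("fix login redirect bug", "bug"),
    ("crash on empty password bug", "bug"),
    ("add dark mode theme", "feature"),
    ("add csv export feature", "feature"),
    ("dark mode toggle feature request", "feature"),
]

def predict_tag(title, k=3):
    """Majority vote among the k nearest labeled titles."""
    neighbors = sorted(training, key=lambda t: distance(vec(title), vec(t[0])))[:k]
    votes = Counter(tag for _, tag in neighbors)
    return votes.most_common(1)[0][0]

print(predict_tag("dark mode export"))   # feature
```

With sklearn the same idea is a `TfidfVectorizer` feeding a `KNeighborsClassifier`, which also scales to real corpus sizes.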
K-Nearest Neighbors Implementation Strategies
Now that we see how easy it is to predict tags and categories, let’s look at how we might implement these in our software. We calculated sentiment at commit time; tags are often preemptive, so we might want to derive recommended tags when we present a tagging modal. As users type, we might also want to present a list that predicts based on the letters they have typed, so we may keep an array of tags we think are appropriate and present only the ones that match the text being typed. If we are working with a list of static, predefined tags, this is straightforward: just capture the tags when entered. Just don’t forget to retrain the model when new tags are created.
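The typing interaction described above boils down to filtering the model's predicted tags by the user's prefix. The predicted tags below are illustrative; in practice they would come from the tagging model at the time the modal opens.

```python
# Tags the model predicted for this record (illustrative).
predicted_tags = ["database", "dashboard", "deployment", "design", "testing"]

def matching_tags(typed):
    """Narrow the predicted tags to those starting with what the user typed."""
    prefix = typed.lower().strip()
    return [t for t in predicted_tags if t.startswith(prefix)]

print(matching_tags("da"))   # ['database', 'dashboard']
```

Since the model runs once per modal and the filtering runs per keystroke, the interface stays responsive even when the prediction itself is slow.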
Free-form tags are a bit different. Users can enter new tags as they go, so we’ll constantly be updating our model, and we need additional layers of learning to make that faster. This points to the need for more of a deep-learning option. At this point it’s best to look into the machine learning libraries or services available for the languages used in a given project.
Categorization is a bit easier than tagging. This is because many products have a single category (or at least a smaller number of categories) per record. Categories change less often, and the closest single neighbor to other objects with a given category is usually more accurate. Categories, as with sentiment, can usually be calculated at the time a record is committed. We should still plan on users creating new categories as they go, though, and account for them in all screens.
We’ve really only covered a few of the dozens of forks in machine learning in this article. Go a little further with 7 Python Machine Learning Modules To Get Acquainted With and do a little research for the specific software stack in use for a given project. But if we’re not using Python anywhere else, we don’t have to suddenly become a polyglot shop.
Let’s say we’re a Java shop and the whole polyglot thing is for the birds. Well, there are plenty of options there as well. This article provides a dozen or so options for Java libraries: 12+ Machine Learning Libraries For Java Projects. And then there are tons of other languages. So whether using Swift, Ruby, Go, or even COBOL, there’s probably a wealth of information on how to quickly import libraries and implement basic options in software projects.
This is not meant to be a comprehensive strategy document, but instead to get developers and product managers started in thinking about how to implement machine learning. Ultimately, each piece of software is unique because developers are trying to do something new and different. Or so we hope.
The various options available can help us go far beyond simply storing and recalling information pertinent to a given cohort, and into processing that information to truly aid in human-computer symbiosis. This lets us predict, diagnose, and detect anomalies and commonalities in our data in ways that, given our specific knowledge of what that data represents, benefit users: saving time, surfacing information faster, and helping people perform in ways that are unparalleled.