New Evaluation Framework

We used a new Evaluation Framework for our latest Product Evaluation Report, which covers Salesforce Service Cloud. We introduced the new Framework to make our reports shorter and more easily actionable. Shorter for sure: our previous report on Service Cloud was 57 pages including illustrations, while this one is 22 pages including illustrations, shorter by more than 60 percent!

We don’t yet know whether the Report is more easily actionable. It was just published. But, our approach to its writing was to minimize descriptions and to bring to the front our most salient analyses, conclusions, and recommendations.

Why?

Our Product Evaluation Reports had become increasingly valuable but to fewer readers. Business analysts facing a product selection decision, analysts for bankers and venture capitalists considering an investment decision, and suppliers’ competitive intelligence staff keeping up with the industry have always appreciated the reports, especially their depth and detail.

However, suppliers, whose products were the subjects of the reports, complained about their length and depth. Requests for more time to review the reports became the norm, extending our publishing cycle. Then, when we finally got their responses, we’d see heavy commenting at the beginning of the reports but light or no commenting at the end, as if they’d lost interest. Our editors have made the same complaints.

More significantly, readership, actually reading in general, is way down. Fewer people read…anything. These days, people want information in very small bites. Getting personal, for example, I loved Ron Chernow’s 800-page Hamilton, but I have spoken to so many who told me that it was too long. They couldn’t get through it and put it down unfinished, or, more typically, they wouldn’t even start it. I’m by no means comparing my Product Evaluation Reports to this masterpiece about American history. I’m just trying to emphasize the point.

Shorter Reports, No Less Research

While the Product Evaluation Report on Salesforce Service Cloud was 60 percent shorter, our research to write it was the same as our research for those previous, much longer Product Evaluation Reports. Our approach to research still has these elements, listed in order of increasing importance:

  • Supplier presentations and demonstrations
  • Supplier web content: web site, user and developer communities
  • Supplier SEC filings, especially Forms 10-Q and 10-K
  • Patent documentation, if appropriate
  • Product documentation, the manuals for administrators, users, and developers
  • Product trial

Product documentation and product trial are the most important research elements, and we spend most of our research time in these two areas. Product documentation, the “manuals” for administrators, users, and developers, provides complete, actual, accurate, and spin-free descriptions of how to set up and configure a product, of what a product does (its services and data), and of how it works. Product trials give us the opportunity to put our hands on a product and try it out for customer service tasks.

What’s In?

The new Framework has these four top-level evaluation criteria:

  • Customer Service Apps list and identify the key capabilities of the apps included in a customer service software product or added to it via features and/or add-ons.
  • Channels, Devices, Languages list supported assisted-service and self-service channels, devices attachable to those channels, and languages that agents and customers may use to access the customer service apps on those devices.
  • Reporting examines the facilities to measure and present information about a product’s usage, performance, effectiveness, and efficiency. Analysts use this information continually to refine their customer service product deployments.
  • Product, Supplier, Offer. Product examines the history, release cycle, development plans, and customer base for a customer service product. They’re the factors that determine product viability. Supplier examines the factors that determine the supplier’s viability. Offer examines the supplier’s markets for the product and the product’s packaging and pricing.

This is the information that we use to evaluate a customer service product.

What’s Missing?

Technology descriptions and their finely granular analyses are out. For example, the new reports do not include tables listing and describing the attributes/fields of the data models for key customer service objects/records like cases and knowledge items, or listing and describing the services that products provide for operating on those data models to perform customer service tasks. Nor do the new reports present analyses of individual data model attributes or individual services. Rather, the reports present a coarsely granular analysis of data models and services with a focus on strengths, limitations, and differentiators. We explain why data models might be rich and flexible, or we identify important missing types, attributes, and relationships, then summarize the details that support our analysis.

“Customer Service Technologies” comprised more than half the evaluation criteria of the previous Framework and two thirds of the content of our previous Framework-based reports. These criteria described and analyzed case management, knowledge management, findability, integration, and reporting and analysis. For example, within case management, we examined case model, case management services, case sources, and case management tools. They’re out in the new version, and they’re the reason the reports are shorter. But they’re the basis of our analysis of the Customer Service Apps criterion. If a product has a rich case model and a large set of case management services, then both will be listed among the case management app’s key capabilities in our Customer Service Apps Table, and we’ll explain why we listed them in the analysis following the Table. On the other hand, if a product’s case model is limited, then case model will be absent from the Table’s list of key capabilities, and we’ll call out the limitations in our analysis. As a reminder, the bases for our evaluation of the Customer Service Apps criterion, the subcriteria of Technologies in the old Framework, are shown in the Table below:

Table 1. We present the bases for the evaluation of the Customer Service Apps criterion in this Table.

Trustworthy Analysis

We had always felt that we had to demonstrate that we understood a technology to justify our analysis of that technology. We had also felt that you wanted and needed our analysis of all of that technology at the detailed level of every individual data attribute and service. You have taught us that you’d prefer higher-level analyses and low-level detail only to understand the most salient strengths, limitations, and differentiators.

The lesson that we’ve learned from you can be found in a new generation of Product Evaluation Reports. Take a look at our latest Report, our evaluation of Salesforce Service Cloud, and let us know if we’ve truly learned that lesson.

Remember, though, if you need more detail, then ask us for it. We’ve done the research.

Virtual Assistant Update


We recently published “Virtual Assistant Update.” It’s a broad and not too deep update on virtual assistant technologies, products, suppliers, and markets from the perspective of the five leading suppliers: [24]7, Creative Virtual, IBM, Next IT, and Nuance. These are the leaders because they:

  • Have been in the virtual assistant business for some time (from 16 years for [24]7 via its acquisition of IntelliResponse to four years for IBM).
  • Have attractive and useful virtual assistant technology.
  • Offer virtual assistant products that are widely used and well proven.
  • Want to be in the virtual assistant business and have company plans and product plans to continue.

The five suppliers are quite diverse. There’s the public $80 billion IBM and the public $2 billion Nuance. Then there are the private [24]7, a venture-backed company big on acquisitions, and the more closely held Creative Virtual and Next IT. Despite these big corporate-level differences, the five’s virtual assistant businesses are quite similar. Roughly, they’re all about the same size, and the five compete as equals to acquire and retain virtual assistant business.

By the way, across the past 12 to 24 months, business has been good for all five suppliers. Customer growth has been very good across the board. Our suppliers have expanded into new markets and have introduced new and/or improved products.

Natural Language Processing and Machine Learning

Technologies are quite similar, too. All five have built their virtual assistant offerings with the same core technologies: Natural Language Processing (NLP) and machine learning.

Virtual Assistants use NLP to recognize intents of customer requests. NLP implementations usually comprise an engine that processes customer requests using an assortment of algorithms to parse and understand the words and phrases in a customer’s request. An NLP engine’s processing is guided by customizable and/or configurable deployment-specific mechanisms such as language models, grammars, and rules. These mechanisms accommodate the vocabularies of a deployment’s business, products, and customers.
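To make the mechanism concrete, here is a toy sketch of rule-guided intent recognition. The intent names and keyword patterns are invented for illustration; production NLP engines use far richer parsing, language models, and grammars, but the shape is the same: an engine processes requests guided by deployment-specific mechanisms.

```python
import re

# Illustrative only: a toy intent recognizer. The intents and patterns
# below stand in for a deployment's customizable language models and rules.
INTENT_RULES = {
    "make_payment": [r"\bpay\b", r"\bpayment\b", r"\bbill\b"],
    "reset_password": [r"\bpassword\b", r"\breset\b", r"\blog ?in\b"],
}

def recognize_intent(request: str) -> str:
    """Score each intent by how many of its patterns match the request."""
    text = request.lower()
    scores = {
        intent: sum(1 for p in patterns if re.search(p, text))
        for intent, patterns in INTENT_RULES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(recognize_intent("I want to pay my bill"))        # make_payment
print(recognize_intent("How do I reset my password?"))  # reset_password
```

Accommodating a deployment’s vocabulary then amounts to editing the rule set, not the engine.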

Virtual assistants use machine learning technology to match actual customer requests with anticipated customer requests and then to select the content or execute the logic associated with the anticipated requests. (Machine learning algorithms learn from and then make predictions on data. Algorithms learn from training. Analysts/scientists train them with sample, example, or typical deployment-specific input, then with feedback or supervision on correct and incorrect predictions. A trained algorithm is a deployment-specific machine learning model. The accuracy of models can improve with additional and continuing training. Some machine learning implementations are self-learning.)
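The matching step can be sketched with a deliberately simple similarity measure. The training pairs and responses below are invented; real implementations use trained statistical models rather than raw word overlap, but the loop is the same: deployment-specific training data yields a model that maps an actual request to the closest anticipated request.

```python
from collections import Counter
import math

# Illustrative only: anticipated request -> associated response content.
TRAINING = {
    "i want to pay my bill": "Here is the payments page.",
    "my order has not arrived": "Let me check your order status.",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words representation of a request."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(request: str) -> str:
    """Select the response tied to the closest anticipated request."""
    vec = vectorize(request)
    best = max(TRAINING, key=lambda t: cosine(vec, vectorize(t)))
    return TRAINING[best]

print(respond("how do I pay a bill"))  # Here is the payments page.
```

Continuing training corresponds to adding pairs to the training data, which is why model accuracy improves with supervision over time.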

Complex and Sophisticated Work: Consultant-led or Consultant-assisted

The work to adapt NLP and machine learning technology implementations for virtual assistant deployments is sophisticated and complex. This is work for experts: scientists, analysts, and developers in languages, data, and algorithms. The approach to this work differentiates virtual assistant suppliers and products. The approach drives virtual assistant product selection. Here’s what we mean.

All the virtual assistant suppliers have built tools and packaged predefined resources to make the work simpler, faster, and more consistent. Some suppliers have built tools for the experts, and these suppliers have also built consulting organizations with the expertise to use their tools. Successful deployments of their virtual assistant offerings are consultant-led: they require the services of the suppliers’ (or the suppliers’ partners’) consulting organizations.

Some suppliers have built tools that further abstract the work and make it possible for analysts, business users, and IT developers to deploy. While these suppliers have also built consulting organizations with expertise in virtual assistant technologies and in their tools, successful deployments of their virtual assistant offerings are consultant-assisted and may even approach self-service.

So, a key factor in the selection of a virtual assistant product is deployment approach: consultant-led or consultant-assisted. Creative Virtual, Next IT, and Nuance offer consultant-led virtual assistant deployments. [24]7 and IBM offer consultant-assisted deployments. For example, IBM Watson Virtual Agent includes tools that make it easy to deploy virtual assistants. In the Figure below, we show the workspace wherein analysts specify the virtual assistant’s response to the customer request to make a payment. Note that the possible responses leverage content, tools, and facilities packaged with the product.


© 2017 IBM Corporation

Illustration 7. This Illustration shows the Watson Virtual Agent workspace for specifying responses from the bot/virtual assistant.


Which is the better approach? Consultant-assisted is our preference, but we’ve learned over our long years of research and consulting that deployment approach is a function of corporate style, personality, and culture. Some businesses and organizations give consultants the responsibility for initial and ongoing technology deployments. Some businesses want to do it themselves. For virtual assistant software, corporate style could very well be a key factor in product selection.


Microsoft Dynamics 365 for Customer Service

Serious Customer Service Capabilities

In our more than 10 years of customer service research, publishing, and consulting, we’d never before published a report about a Microsoft offering. It’s not because Microsoft hasn’t had a customer service offering or that the company hasn’t had success in business applications. Since 2003, its CRM suite has always included a customer service app. And, its Dynamics CRM brand has built a customer base of tens of thousands of accounts and millions of users. But, Dynamics CRM had always been more about its sales app and that app’s integration with Office and Outlook. Customer service capabilities have been a bit limited. No longer.

Beginning in November 2015, the improvements in two new releases—CRM 2016 and CRM 2016 Update 1—and, in November 2016, the introduction of the new Dynamics 365 brand have strengthened, even transformed, Microsoft’s customer service app and have made Microsoft a player to consider in the high end of the customer service space.

Our Product Evaluation Report on Microsoft Dynamics 365 for Customer Service, published December 1, 2016, will help that consideration. These are the new and/or significantly improved customer service components:

  • Knowledge management
  • Search
  • Customer service UI
  • Web self-service and communities
  • Social customer service

Let’s take a closer but brief look at each of them.

Knowledge Management

Knowledge Management is the name of a new customer service component. Introduced with CRM 2016, it’s a comprehensive knowledge management system with a rich and flexible knowledge model, a large set of useful knowledge management services, and an easy to learn and easy to use toolset. The best features of Knowledge Management are:

  • Visual tools of Interactive Service Hub, the customer service UI
  • Knowledge lifecycle and business processes that implement and support the lifecycle
  • Language support and translation
  • Version control
  • Roles for knowledge authors, owners, and managers

For example, Knowledge Management comes with a predefined but configurable knowledge lifecycle with Author, Review, Publish, and Expire phases. The screen shot in Figure 1 shows the steps in the Author phase.

Figure 1. This screen shot shows the steps in the Author phase of the knowledge management process.
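The predefined lifecycle above can be sketched as a simple state machine. The phase names follow the description, but the class and transition logic here are our own illustrative assumptions, not Dynamics 365 internals.

```python
# Illustrative sketch only: a minimal state machine for the predefined
# Author -> Review -> Publish -> Expire knowledge lifecycle.
LIFECYCLE = ["Author", "Review", "Publish", "Expire"]

class Article:
    def __init__(self, title: str):
        self.title = title
        self.phase = LIFECYCLE[0]  # every article starts in Author

    def advance(self) -> str:
        """Move the article to the next lifecycle phase, if any remain."""
        i = LIFECYCLE.index(self.phase)
        if i < len(LIFECYCLE) - 1:
            self.phase = LIFECYCLE[i + 1]
        return self.phase

art = Article("Resetting your password")
art.advance()  # Author -> Review
art.advance()  # Review -> Publish
print(art.phase)  # Publish
```

A configurable lifecycle, as Knowledge Management provides, would let administrators change the phase list and the steps within each phase rather than hard-code them.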

Note that Knowledge Management is based on technology from Parature, a Reston, VA-based supplier with a customer service offering of the same name that Microsoft acquired in 2014. Beginning with the introduction of Dynamics 365, Microsoft no longer offers the Parature customer service product.

Search

Search is not a strength of Dynamics 365. Search sources are limited. Search query syntax is simple. There are few search analyses and few facilities for search results management. However, with the Dynamics 365 rebranding, Microsoft has made improvements. Categorized Search, the new name of the search facility in Dynamics 365, retrieves database records with fields that begin with the words in search queries and lets administrators and seekers facet (Categorize) search results. The new Relevance Search adds relevance and stemming analyses. Microsoft still has work to do, but faceting, stemming, and relevance are a start to address limitations.

Customer Service UI – Interactive Service Hub

Interactive Service Hub (ISH) provides several useful and very attractive capabilities in Dynamics 365. It’s the UI for Knowledge Management, one of two UIs for case management, and a facility for creating and presenting dashboards. For the case management and knowledge management UIs, ISH provides visual tools that are easy to learn and easy to use. The tools let agents perform every case management task and let authors and editors perform every knowledge management function. For example, Figure 2 shows a screen shot of ISH’s presentation of an existing Case—the Name of the Case at the top left, the Case information to display “SUMMARY | DETAILS | CASE RELATIONSHIPS | SLA” under the Name, the phases of the deployment’s case management process “IDENTIFY QUALIFY RESEARCH RESOLVE” within a ribbon near the top of the screen, and the (SUMMARY) Case information in the center.

Figure 2. This screen shot shows the Interactive Service Hub display of an existing Case.

In addition to tools for building dashboards, ISH also packages useful predefined dashboards, two for case management and two for knowledge management. The four help customer service managers and agents and knowledge management authors and editors manage their work. Figure 3 shows an example of the My Knowledge Dashboard. It presents information useful to authors and editors in a highly visual, interactive way.

Figure 3. This screen shot shows an example of the My Knowledge Dashboard.

Web Self-service and Communities

We were quite surprised to learn that, prior to the May 2016 introduction of CRM 2016 Update 1, Dynamics 365 for Customer Service and all of its predecessor products did not include facilities for building and deploying web self-service or communities sites. This limitation was addressed in Update 1 with the then-named CRM Portal service, renamed the Portal service in Dynamics 365. Portal service is a template-based toolkit for developing (web development skills are required) and deploying browser-based web self-service and communities/forums sites. It’s based on technology from Adxstudio, which Microsoft acquired in September 2015, and it packages templates for a Customer Service Portal and a Community Portal. Note that Dynamics 365 for Customer Service licenses include one million page views per month for runtime usage of sites built on the Portal service (licenses may be extended with additional page views per month).

Social Customer Service

Microsoft Social Engagement is a separately packaged and separately priced social customer service offering that Microsoft introduced early in 2015. Social Engagement provides facilities that listen for social posts across a wide range of social sources (Instagram, Tumblr, WordPress, and YouTube as well as Facebook and Twitter), that analyze the content and sentiment of those posts, and that interact with social posters. In addition, Social Engagement integrates with Dynamics 365 for Customer Service. Through this integration, the automated or manual analysis of social posts can result in creating and managing customer service Cases. It’s a strong social customer service offering. What’s new is that Microsoft now bundles Social Engagement with Dynamics 365 for Customer Service. That’s a very big value add.

All This and More

We’ve discussed the most significant new and improved capabilities of Dynamics 365 for Customer Service. Knowledge Management, Interactive Service Hub, improved Search, the Portal service, and bundled Social Engagement certainly strengthen the offering. Although not quite as significant, Microsoft added and improved many other capabilities, too. For example, there are language support improvements, improvements to integration with external apps, new Customer Survey and “Voice of the Customer” feedback capabilities, and the use of Azure ML (Machine Learning) to suggest Knowledge Management Articles as Case resolutions automatically based on Case attribute values. Bottom line, Microsoft Dynamics 365 for Customer Service deserves serious consideration as the key customer service app for large businesses and public sector organizations, especially those that are already Microsoft shops.

Evaluating Customer Service Products

Framework-based, In-depth Product Evaluation Reports

We recently published our Product Evaluation Report on Desk.com, Salesforce’s customer service offering for small and mid-sized businesses. “Desk” is a very attractive offering with broad and deep capabilities. It earns good grades on our Customer Service Report Card, including Exceeds Requirements grades in Knowledge Management, Customer Service Integration, and Company Viability.

We’re confident that this report provides input and guidance to analysts in their efforts to evaluate, compare, and select customer service products, and we know that it provides product assessment and product planning input for its product managers. Technology analysts and product managers are the primary audiences for our reports. We research and write to help exactly these roles. Like all of our Product Evaluation Reports about customer service products that include multiple apps (case management, knowledge management, web self-service, communities, and social customer service), it’s a big report, more than 60 pages.

Big is good. It’s their depth and detail that make them so. Our research for them always includes studying a product’s licensed admin, user, and, when accessible, developer documentation, the manuals or online help files that come with a product. We read the patents or patent applications that are a product’s technology foundation. Whenever offered, we deploy and use the products. (We took the free 30-day trial of Desk.) We’ll watch suppliers’ demonstrations, but we rely on the actual product and its underlying technologies.

On the other hand, we’ve recently been hearing from some, especially product marketers when they’re charged to review report drafts (we never publish without the supplier’s review), that the reports are too big. Okay. Point taken. Perhaps it is time to update our Product Evaluation Framework, the report outline, to produce shorter, more actionable reports, reports with no less depth and detail but with less descriptive content and more salient analytic content. It’s also time to tighten up our content.

Product Evaluation Reports Have Two Main Parts

Our Product Evaluation Reports have had two main parts: Customer Service Best Fit and Customer Service Technologies. Customer Service Best Fit “presents information and analysis that classifies and describes customer service software products…speed(ing) evaluation and selection by presenting easy to evaluate characteristics that can quickly qualify an offering.” Customer Service Technologies examines the implementations of a product’s customer service applications and their foundation technologies as well as its integration and reporting and analysis capabilities. Here lies the reports’ depth and detail (and most of the content). Going forward, we’ll continue with this organization.

Streamlining Customer Service Best Fit

We will revamp and streamline Customer Service Best Fit, improving naming and emphasizing checklists. The section will now have this organization:

  • Applications, Channels, Devices, Languages
  • Packaging and Licensing
  • Supplier and Product
  • Best Prospects and Sample Customers
  • Competitors

Applications, Channels, Devices, Languages are lists of key product characteristics, characteristics that quickly qualify a product for deeper consideration. More specifically, applications are the sets of customer service capabilities “in the box” with the product: case management, knowledge management, and social customer service, for example. Channels are assisted-service, self-service, and social. We list apps within supported channels to show how what’s in the box may be deployed. Devices are the browsers and mobile devices the product supports for internal users and for end customers. Languages are two lists: one for the languages the product supports for its administration and internal users, and one for the languages it supports for end customers.

Packaging and Licensing presents how the supplier offers the product, the fees that it charges for the offerings, and the consulting services available and/or necessary to help licensees deploy the offerings.

Supplier and Product present high-level assessments of the supplier’s and the product’s viability. For the supplier, we present history, ownership, staffing, financial performance, and customer growth. For the product, we present history, current development approach, release cycle, and future plans.

Best Prospects and Sample Customers are lists of the target markets for the product—the industries, business sizes, and geographies wherein the product best fits. This section also contains the current customer base for the product, a list of typical/sample customers within those target markets and, if possible, presents screen shots of their deployments.

Competitors lists the product’s closest competitors, its best alternatives. We’ll also include a bit of analysis explaining what makes them the best alternatives and where the subject product has differentiators.

Tightening-up Customer Service Technologies

Customer Service Technologies is the key value-add and most significant differentiator of our Product Evaluation Reports. It’s why you should read our reports, but, as we mentioned, it’s also the main reason why they’re big.

We’ve spent years developing and refining the criteria of our Evaluation Framework. The criteria are the results of continuing work with customer service products and technologies and of our complementary work with the people who are products’ prospects, licensees, suppliers, and competitors. We’re confident that we evaluate the technologies of customer service products by the most important, relevant, and actionable criteria. Our approach creates common, supplier-independent and product-independent analyses. These analyses enable the evaluation and comparison of similar customer service products and result in faster and lower risk selection of the product that best fits a set of requirements.

However, we have noticed that the descriptive content that forms the bases for our analyses has gotten a bit lengthy and repetitive (repeating information in Customer Service Best Fit). We plan to tighten up Customer Service Technologies content and analysis in these ways:

  • Tables
  • Focused Evaluation Criteria
  • Consistent Analysis
  • Reporting

Too much narrative and analysis has crept into Tables. We’ll make sure that Tables are bulleted lists with little narrative and no analysis.

Evaluation criteria have become too broad. We’ve been including detailed descriptions and analyses of related and supporting resources along with the resource that’s the focus of the evaluation. For example, when we describe and analyze the details of a case model, we’ll no longer also describe and analyze the details of user and customer models. Rather, we’ll just describe the relationships between the resources.

Our analyses will have three sections. The first will summarize what’s best about a product. The second will present additional description and analysis where Table content needs further examination. The third will be “Room for Improvement,” areas where the product is limited. This approach will make the reports more actionable and more readable as well as shorter.

In reporting, we’ll stop examining instrumentation, the collection and logging of the data that serves as report input. The presence (or absence) of reports about the usage and performance of customer service resources is really what matters. So, we’ll call the criterion “Reporting” and we’ll list the predefined reports packaged with a product in a Table. We’ll discuss missing reports and issues in instrumentation in our analysis.

Going Forward

Our Product Evaluation Report about Microsoft Dynamics CRM Online Service will be the first to be written on the streamlined Framework. Expect it in the next several weeks. Its Customer Service Best Fit section really is smaller. Each of its Customer Service Technologies sections is smaller, too, and more readable and more actionable as well.

Here’s the graphic of our Product Evaluation Framework, reflecting the changes that we’ve described in this post.


Please let us know if these changes make sense to you and please let us know if the new versions of the Product Evaluation Reports that leverage them really are more readable and more actionable.

Who You Gonna Call?

Apologies to Ray Parker Jr. While your question or problem may not be about ridding your neighborhood of ghosts, “Who You Gonna Call” to get the answer or solution that you need?

Getting help on the Internet or on your mobile device is easy: type your question into the search box of your favorite Internet search engine, or ask Siri (now Alexa and Cortana, too). But it’s not always easy to get an answer or a solution to complex, detailed, or involved questions and problems. Who You Gonna Call with those?

Questions and Problems

Over the past several months, your blogger has had quite a few questions and problems for which answers and solutions were not so easy to find. Here are some of them:

  • My Whirlpool electric dryer doesn’t heat (or maybe it overheats before it doesn’t heat).
  • My Toro gasoline powered lawnmower is hard to start and stalls when it does start.
  • My new iPhone 6s doesn’t pair with the Bluetooth audio system in my car.
  • Which should I buy: an electric induction cooktop, a standard electric cooktop, or a natural gas cooktop?

DIY Answers and Solutions

Getting answers and solutions to these questions and problems involves getting your hands dirty, literally or figuratively. These questions and problems are about what things do, how things are put together/assembled, and the way that things work. I want the inside information that I can use to explain the answers and apply the fixes myself. I’m a DIY (Do It Yourself) kind of person, a DIYer. I’m willing and eager, and I have tools. I enjoy the challenge and I revel in the satisfaction of getting the answers or fixing the problems myself. I’m not looking for a pro to do the work for me for a fee.

So who was your blogger gonna call to get answers and fixes to the list of questions and problems? Let’s take a look at these possibilities:

  • Social networks
  • Communities and forums
  • YouTube
  • Brand sites
  • Build and repair sites

Social Networks

Crowd-sourcing answers and fixes from the members of my social networks might not work for these kinds of questions and problems. While many of my friends and followers are DIY kind of people, too, the most I expect from a crowd-sourced approach is a reference to a web site or to an expert. Very helpful to be sure, but a step removed from what I need.

Communities and Forums

Communities and forums let members post questions and problems within topics in the hopes that other community members will reply with comments that contain answers and solutions. There are two types of communities and forums. Communities of the first type are hosted and moderated by the brand about which customers ask questions or pose problems and receive answers and solutions from other customers as well as from subject matter experts (SMEs) who may also be customers or may be on the brand’s customer service staff. These communities can be very helpful, especially when the brand’s employees monitor and moderate customers’ questions and problems. Brand employee participation ensures correct answers and solutions. They’re not so helpful when their answers and solutions lack detail or when their topics do not include the subjects of questions and problems. We’ve seen communities for ISVs that seem only to suggest consulting services as answers and solutions. We’ve seen communities with topics only about making suggestions for product or service improvements or only about customer experience with a brand.

The second type of community or forum is hosted and managed independently of the brand that is the subject of its topics. Posts on these communities commonly contain complex, detailed, technical questions and problems. Comments frequently contain exactly the answers and solutions in the level of detail that DIYers crave. On the other hand, many of these communities have no moderation or monitoring by SMEs. They exercise no control over comments. For example, below is a post from acuraworld.com that accurately represents my question about Bluetooth pairing a new iPhone. The comment contains an unmoderated and unappealing answer.

iphone acura bluetooth

© 2016 Acuraworld

Perhaps this answer does solve the problem, but I would never “Reset All Settings” on my iPhone to solve it. A better answer lists the steps to establish a new pairing in the car, a pain for sure because voice tags are phone-specific in my car’s system. Be careful with communities and forums.

YouTube

YouTube has a huge library of DIY videos. Find the video that answers your question or solves your problem by searching within the site. YouTube’s videos are posted by brands, by repair pros, and by DIYers. YouTube does not monitor or moderate this content. So, DIYer beware. Be careful whose advice you take.

A YouTube DIYer video, https://www.youtube.com/watch?v=0Ni-rdRyxA0, contained the fix to my starting/stalling lawnmower problem. I found it after searching independent communities for the problem symptom and learning that my problem was somewhere in the lawnmower’s fuel system, likely the carburetor. Note that Toro.com, the brand site for my lawnmower, was similar to Whirlpool.com, offering downloads of product manuals.

Brand Sites

Brands’ web sites may contain the level of information that answers detailed questions or that fixes problems with their products. For my dryer problem, I went to whirlpool.com, clicked the Owners tab, and clicked the Support tab to get to this site:

whirlpool support

© 2016 Whirlpool

I followed the Manuals tab/Find Manuals link, then entered the model number. For my model, Whirlpool provides three downloads:

  • Owners Manual
  • Installation Instructions
  • Parts List

The Owners Manual is a “Use and Care Guide.” Its content is not model-specific or even specific to dryer type—electric or gas. It does contain an If You Need Assistance or Service section that provides some high-level troubleshooting information as well as telephone numbers and mailing addresses (It’s an old dryer.). The Parts List contains numbered schematics and corresponding lists of part numbers and brief descriptions or names of every single part of the dryer. This information is essential because most fixes require replacing broken parts, and part numbers are how you identify them. The Parts List also gives an idea of how the dryer is assembled and of how it works. The heating element, thermostats, and fuses are the likely causes of not heating and of overheating. These parts are numbers 6, 7, 8, 15, and 17 on the schematic for Bulkhead Parts shown below.

whirlpool parts

© 2016 Whirlpool

Looking at the schematic, it’s difficult to visualize an assembled dryer and the locations of and access to the heating element, thermostats, and fuses. Mechanical/electrical aptitude and actual repair experience are required for that. You’ll have them after a single repair, but don’t call a pro yet. More online help is available.

Repair and Parts Sites

Repair and parts sites are exactly that online help. My fav is repairclinic.com. Go there, enter your model number and you’ll see an extremely helpful page like this:

repair clinic 2

© 2016 RepairClinic.com, Inc

In addition to a list of parts with pictures and descriptions, Repair Clinic also provides a list of Common Problems on the left of the page. Click “Dryer overheating” to reach this page:

repairclininc

© 2016 RepairClinic.com, Inc

Now the fix is very close. This page contains everything you’ll need to understand how dryers work and how/why they break, to diagnose and verify the problem, to identify the part causing it, and to order the part to fix it. The ordered list of likely causes with descriptions and videos is especially helpful. I love this site. It contains similar information for lawn equipment, heating and cooling, and power tools as well as appliances. But, repairclinic.com is not the only site that provides diagnostics and parts for fixing these types of problems.

SearsPartsDirect.com contains information similar to RepairClinic.com, and not just for Sears’ products. ThisOldHouse.com, the web site for the long-running PBS series, contains a wealth of answers and solutions to a wide range of home improvement and repair questions, problems, and projects. Answers and solutions are easy-to-understand videos presented by the show’s experts. The video library is continually growing.

Cooktops

The last items on my list are a product research question about cooktops and a tax question about annuities. Regarding cooktops, I was ready to replace my 30-something-year-old electric cooktop with a gas cooktop. My product research started, as it usually does, on ConsumerReports.org. It’s a subscription site; I’ve been a subscriber and a member for many years. First, I looked at the Buying Guide for cooktops, where I learned about electric induction cooktops. The description and analysis changed my mind about gas. Then I went to product ratings of electric induction cooktop products. Consumer Reports rated GE Profile products highly, and my wife and I have been very happy with the other GE Profile appliances in our kitchen. That’s what we bought. Of course, I installed it.

Recommendations

The Internet is a wonderful resource for getting DIY answers and solutions. The challenge for DIYers will be identifying the correct and most usable answers and solutions from a myriad of reasonable possibilities. Who You Gonna Call? Generally, we recommend:

  • Brand sites
  • Moderated and monitored communities
  • Repair and parts sites
  • YouTube

More specifically, RepairClinic.com and, especially, ConsumerReports.org are our favorites. Your subscription and membership fees to Consumer Reports will be paid back many times over with the best product research.


The Helpdesks: Desk.com, Freshdesk, Zendesk

We’ve added our Product Evaluation Report on Freshdesk to our library of in-depth, framework-based reports on customer service software. We put this report on the shelf, so to speak, next to our Product Evaluation Reports on Desk.com and Zendesk. The three products are quite a set. They’re similar in many ways, remarkably so. Here are a few of those similarities:

The products are “helpdesks,” apps designed to provide an organization’s customers (or users) with information and support about the organization’s products and services. Hence, their names are (alphabetically) Desk.com, Freshdesk, and Zendesk.

They have the same sets of customer service apps and those apps have very similar capabilities: case management, knowledge management and community/forum with a self-service web portal and search, social customer service supporting Facebook and Twitter, chat, and telephone/contact center. Case management is the core app and a key strength for all of the products. Each has business rules-based facilities to automate case management tasks. On the other hand, knowledge management and search are pretty basic in all of them.
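To make “business rules-based automation” concrete, here’s a hedged sketch of how such rules behave; the trigger conditions and actions are invented for illustration and are not any of the three products’ actual rule syntax.

```python
# Illustrative rule-based case automation: each rule is a (condition,
# action) pair applied to a case. The conditions and actions below are
# invented for the example, not any helpdesk's real configuration.

def apply_rules(case, rules):
    """Run every matching rule's action against a case dict."""
    for condition, action in rules:
        if condition(case):
            action(case)
    return case

rules = [
    # Route refund requests to a (hypothetical) billing group.
    (lambda c: "refund" in c["subject"].lower(),
     lambda c: c.update(group="billing")),
    # Escalate anything marked urgent.
    (lambda c: c.get("priority") == "urgent",
     lambda c: c.update(escalated=True)),
]

case = apply_rules({"subject": "Refund request", "priority": "urgent"}, rules)
```

In all three products, rules like these fire on case creation or update; the sketch only shows the shape of the idea.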

The three also include reporting capabilities and facilities for integrating external apps. Reporting has limitations in all three. Integration is excellent across the board.

The products deploy in the cloud. They support the same browsers, and all three also have native apps for Android and iOS devices.

All three are packaged and priced in tiers/levels/editions of functionality. Their licensing is by subscription with monthly, per user license fees.

Simple, easy to learn and easy to use, and cross/multi/omni-channel are the ways that the suppliers position these offerings. Our evaluations were based on trial deployments for each of the three products. We found that all of them support these positioning elements very well.

Small (and very small) and mid-sized businesses across industries in all geographies are their best fits, although the suppliers would like to move up market. The three products have very large customer bases—somewhere around 30,000 accounts for Desk.com and Zendesk and more than 50,000 accounts for Freshdesk, per a claim in August from Freshdesk’s CEO. Note that Desk.com was introduced in 2010, Freshdesk in 2011, and Zendesk in 2004.

Suppliers’ internal development organizations design, build, and maintain the products. All three suppliers have used acquisitions to extend and improve product capabilities.

While the products are similar, the three suppliers are quite different. Salesforce.com offers Desk.com. Salesforce is a publicly held, San Francisco, CA based, $8 billion corporation founded in 1999. Salesforce has multiple product lines. Freshdesk Inc. offers Freshdesk. It’s a privately held corporation founded in 2010 and based in Chennai, India. Zendesk, Inc. offers Zendesk. The company was founded in 2007 in Denmark and reincorporated in the US in 2009. It’s publicly held and based in San Francisco, CA. Revenues in 2015 were more than $200 million.

These differences—public vs. private, young vs. old(er), large vs. small(er), single product line vs. multiple product line—will certainly influence many selection decisions. However, all three are viable suppliers and all three are leaders in customer service software. The supplier risk in selecting Desk.com, Freshdesk, or Zendesk is small.

Then, where are the differences that result in making a selection decision? The differences are in the ways that the products’ developers have implemented the customer service applications. The differences become clear from actually using the products. Having actually used all three products in our research, we’ve learned the differences and we’ve documented them in our Product Evaluation Reports. Read them to understand the differences and to understand how those differences match your requirements. There’s no best among Desk.com, Freshdesk, and Zendesk but one of them will be best for you.

For example, here’s the summary of our Freshdesk evaluation, the grades that the product earned on our Customer Service Report Card: “Freshdesk earns a mixed Report Card—Exceeds Requirements grades in Capabilities, Product Management, Case Management, and Customer Service Integration, Meets Requirements grades in Product Marketing, Supplier Viability, and Social Customer Service, but Needs Improvement grades in Knowledge Management, Findability, and Reporting and Analysis.”

Case Management is where Freshdesk has its most significant differences: its large set of case management services and facilities, its support for case management teams, its automation of case management tasks, and its easy to learn, easy to use case management tools. For example, Arcade is one of Freshdesk’s facilities for supporting case management teams. Arcade is a collection of three optional gamification facilities that set and track goals for agents’ customer service activities:

  • Points: agents earn Points for resolving Tickets in a fast and timely manner and lose points for being late and for having dissatisfied customers, accumulating points toward reaching six predefined skill levels.
  • Trophies: agents earn trophies for monthly Ticket management performance.
  • Quests: Arcade awards bonus points for achieving customer service goals such as forum participation or publishing knowledgebase Solutions.

Administrators can configure Arcade’s Points and skill levels. Trophies and Quests have predefined goals; however, administrators can turn Quests on or off. The illustration below shows the workspace that administrators use to configure Points.

arcade points
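To make the Points mechanics concrete, here’s an illustrative sketch of that style of scoring; the point values and level thresholds are invented for the example, not Freshdesk’s actual defaults.

```python
# Arcade-style agent scoring, sketched with made-up numbers. Real
# deployments configure these values in the workspace shown above.

FAST_RESOLUTION = 10    # points for resolving a Ticket on time
LATE_PENALTY = -5       # points lost for a late resolution
UNHAPPY_PENALTY = -3    # points lost for a dissatisfied customer
LEVELS = [0, 100, 300, 600, 1000, 1500]  # six cumulative skill levels

def score_ticket(resolved_on_time, customer_satisfied):
    """Points earned or lost for a single resolved Ticket."""
    points = FAST_RESOLUTION if resolved_on_time else LATE_PENALTY
    if not customer_satisfied:
        points += UNHAPPY_PENALTY
    return points

def skill_level(total_points):
    """Highest level whose threshold the agent's total has reached."""
    return max(i for i, t in enumerate(LEVELS) if total_points >= t)
```

An agent with 350 accumulated points, for instance, would sit at the third of the six levels under these invented thresholds.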

Freshdesk can be a Customer Service Best Fit for many small and mid-sized organizations. Is it a Best Fit for you? Read our Report to understand why and how.

Nuance Nina Virtual Assistants

We evaluated Nina, the virtual assistant offering from Nuance, for the third time, publishing our Product Evaluation Report on October 29, 2015. This Report covers both Nina Mobile and Nina Web.

Briefly, by way of background, Nina Mobile provides virtual assisted-service on mobile devices. Customers ask questions or request actions of Nina Mobile’s virtual assistants by speaking or typing them. Nina Mobile’s virtual assistants deliver answers in text. Nina Mobile was introduced in 2012. We estimate that approximately 15 Nina Mobile-based virtual assistants have been deployed in customer accounts.

Nina Web provides virtual assisted-service through web browsers on PCs and on mobile devices. Customers ask questions or request actions of Nina Web’s virtual assistants by typing them into text boxes. Nina Web’s virtual assistants deliver answers or perform actions in text and/or in speech. Nina Web was introduced as VirtuOz Intelligent Virtual Agent in 2004. Nuance acquired VirtuOz in 2013. We estimate that approximately 35 Nina Web-based virtual assistants have been deployed in customer accounts.

The two products now have common technologies, tools, and a development and deployment platform. That’s a big deal. They had been separate and pretty much independent products, sharing little more than a brand. Nuance’s development team has been busy and productive. Nina also has many new and improved capabilities. Most significant are a new and additional toolset that supports key tasks in initial deployment and ongoing management, PCI (Payment Card Industry) certification, which means that Nina virtual assistants can perform ecommerce tasks for customers, support for additional languages, and packaged integrations with chat applications.

Nina Evaluation Process

We did not include an evaluation of Nina’s Ease of Evaluation. Our work on the Nina Product Evaluation Report was well underway before we added that criterion to our framework. So, we’ll offer that evaluation here.

For our evaluation, we used:

  • Product documentation, which was provided to us by Nuance under an NDA
  • Demonstrations, especially of new tools and functionality, conducted by Nuance product management staff
  • Web content of nuance.com
  • Online content of Nina deployments
  • Nuance’s SEC filings
  • Discussions with Nuance product management and product marketing staff
  • A thorough (and very much appreciated) review of our report draft

We also leveraged our knowledge of Nina, knowledge that we acquired in our research for two previously published Product Evaluation Reports from July 2012 and January 2014. We know the product, the underlying technology, and the supplier. So we were able to focus our research on what was new and improved.

Product Documentation

Product documentation, the end user/admin manuals for the Nina IQ Studio (NIQS) and the new Nuance Experience Studio (NES) toolsets, was the key source for our research. We found the manuals to be well written and reasonably easy to understand. Samples and examples illustrated simple use cases and supported descriptions very well. Showing more complex use cases, especially for customer/virtual assistant dialogs, would have been very helpful. Personalization facilities could be explained more thoroughly. Also, there’s a bit of inconsistency in terminology between the two toolsets and their documentation.

Nina Deployments

Online content of Nina deployments helped our research significantly. Within the report, we showed two examples of businesses that have licensed and deployed Nina Web: up2drive.com, the online auto loan site for BMW Financial Services NA, LLC, and the Swedish-language site for Swedbank, Sweden’s largest savings bank. The up2drive Assist box accesses the site’s Nina Web virtual assistant. We asked, “How do I qualify for the lowest new car rate?” See the illustration just below.

up2drive

Online content of Nina Mobile deployments shows how virtual assistants can perform actions for customers. For example, we showed how Dom, the Nina Mobile virtual assistant, could help you order pizza from Domino’s in our blog post of May 14, 2015. See https://www.youtube.com/watch?v=noVzvBG0GD0.

Take care when using virtual assistant deployments for evaluation and selection. They’re only as good as the deploying organization wants to make them. Their limitations are almost never the limitations of the virtual assistant software. Every virtual assistant software product that we’ve evaluated has the facilities to implement and deliver excellent customer service experience. Virtual assistant deployments, like all customer experience deployments, are limited by the deploying organization’s investment in them. The level of investment controls which questions they can answer, which actions they can perform, how well they can deal with vague or ambiguous questions and action requests, and their support for dialogs/conversations, personalization, and transactions.

No Trial/Test Drive

Note that Nuance did not provide us with a product trial/test drive of Nina. In fact, Nuance does not offer Nina trials/test drives to anyone. That’s typical of virtual assistant software. Suppliers want easy and fast self-service trials that lead prospects to license their offerings. Virtual assistant software trials are not any of those things. They’re not designed for self-service deployment, either for free or for a fee.

Why not? Because virtual assistant software is complex. Even its simplest deployment requires building a knowledgebase of answers to the typical and expected questions that customers ask; using virtual assistant facilities to deal with vague and ambiguous questions by, for example, engaging in a dialog/conversation, escalating to chat, or presenting a “no results found” message; and using virtual assistant facilities to perform the actions that customers request and deciding how to perform them. (Performing actions will likely require integration with apps external to virtual assistant apps.) This is not the stuff of self-service trials and test drives.

In addition, most virtual assistant suppliers have not yet invested in building tools that speed and simplify the work that organizations must perform for the initial deployment and ongoing management of virtual assistant software even after it has been licensed. Rather, suppliers offer their consulting services instead. (That’s changing for Nuance with toolsets like NES, and for several other virtual assistant software suppliers, too; that’s certainly a topic for a later time.)

Thank You Very Much, Nuance

One more point about Ease of Evaluation. Our research goes into the details of customer service software. We publish in-depth Product Evaluation Reports. We demand a significant commitment from suppliers to support our work. Nuance certainly made that commitment and made Nina Easy to Evaluate for us. We so appreciate Nuance’s support and the time and effort taken by its staff.

Nina was very easy for us to evaluate. The product earns a grade of Exceeds Requirements in Ease of Evaluation.

Zendesk, Customer Service Software That’s Easy to Evaluate

Zendesk Product Evaluation

Zendesk is the customer service offering from Zendesk, Inc., a publicly held, San Francisco, CA based software supplier with 1,000 employees that was founded in 2004. The product provides cloud-based case management, knowledge management, communities and collaboration, and social customer service capabilities across assisted-service, self-service, and social customer service channels.

We evaluated Zendesk against our Evaluation Framework for Customer Service and published our Product Evaluation Report on October 22. Zendesk earned a very good Report Card—Exceeds Requirements grades in Product History and Strategy, Case Management, and Customer Service Integration, and Meets Requirements grades for all other criteria but one, Social Customer Service. Its Needs Improvement grade there reflects less an issue with packaged capabilities than the need for a specialized external app designed and positioned for wide and deep monitoring of social networks.

Evaluation Framework

Our Evaluation Framework considers an offering’s functionality and implementation, what a product does and how it does it. It also considers the supplier and the supplier’s product marketing (positioning, target markets, packaging and pricing, competition) and product management (release history and cycle, development approach, strategy and plans) for the offering.

We rely on the supplier for product marketing and product management information. First we gather that info from the supplier’s website and press releases and, if the supplier is publicly held, from the supplier’s SEC filings. We speak directly with the supplier for anything else in these areas.

For functionality and implementation, the supplier typically gives us (frequently under NDA) access to the product’s user and developer documentation, the manuals and help files that licensees get. In this era of cloud computing, we’ve been more and more frequently getting access to the product, itself, through online trials. We also read any supplier’s patents and patent applications to learn about the technology foundation of functionality and implementation.

In addition, we entertain the supplier’s presentations and demonstrations. They’re useful to get a feel for the style of the product and the supplier and to understand future capabilities. However, to really understand the product, there’s no substitute for actual usage (where we drive) and/or documentation.

Our research process includes insisting that the supplier reviews and provides feedback on a draft of the Product Evaluation Report. This review process ensures that we respect any NDA, improves the accuracy and usefulness of the information in the report, and prevents embarrassing the supplier and us.

Ease of Evaluation, a New Evaluation Criterion

Our frameworks have never had an Ease of Evaluation criterion. We’ve always figured that we’d do the work to make your evaluation and selection of products easier, faster, and less costly. Our evaluation of Zendesk has us rethinking that. We’ve learned that our Product Evaluation Reports can speed and shorten your evaluation and selection process, but your process doesn’t end with our reports. You do additional evaluation, modifying and extending our criteria or adding criteria to represent requirements specific to your organization, your business, and/or your application for a product. Understanding Ease of Evaluation can further speed and shorten your evaluation and selection process.

So, beginning with our next Product Evaluation Report, you’ll find an Ease of Evaluation criterion in our framework.

Zendesk Was Very Easy to Evaluate

By the way, Zendesk would earn an Exceeds Requirements grade for Ease of Evaluation. We did a 30-day trial of the product. We signed up for the trial online—no waiting. During the trial, we submitted cases to Zendesk Support and we used the Zendesk community forums. In addition, Zendesk.com provided a wealth of detailed information about the product, including technical specifications and a published RESTful API.

Scroll down to the bottom of Zendesk.com’s home page to see a list of UNDER THE HOOD links.

under the hood

Looking at the UNDER THE HOOD links in a bit more detail:

  • Apps and integrations is a link to a marketplace for third party apps. Currently there are more than 300 of them.
  • Developer API is a link to the documentation of Zendesk’s RESTful, JavaScript API. It lists and comprehensively describes more than 100 services.
  • Mobile SDK is a link to documentation for Android and iOS SDKs and for the Web Widget API. (The Web Widget embeds Zendesk functionality such as ticketing and knowledgebase search in a website.)
  • Security is a link to descriptions of security-related features and lists of Zendesk’s security compliance certifications and memberships.
  • Tech Specs is a link to a comprehensive collection of documents that describe Zendesk’s functionality and implementation.
  • What’s new is a link to high-level descriptions of recently added capabilities.
  • Uptime is a link to info and charts about the availability of Zendesk Inc.’s cloud computing infrastructure.
  • Legal is a link to a description of the Terms of Service of the Zendesk offering.

We spent considerable time in Tech Specs and Developer API. We found the content to be comprehensive, well organized and easy to access, and well written. The combination of the product trial and UNDER THE HOOD made Zendesk easy to evaluate. And, we did not have to sign an NDA for access to any of this information.
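As a small taste of what the Developer API documentation covers, here’s a sketch of calling the Tickets endpoint; the subdomain, email, and API token are placeholders, and the error handling is minimal.

```python
# Sketch: fetching tickets through Zendesk's REST API (v2), using only
# the standard library. Subdomain and credentials are placeholders.
import base64
import json
import urllib.request

def tickets_url(subdomain):
    """Build the Tickets endpoint URL for a Zendesk account."""
    return "https://%s.zendesk.com/api/v2/tickets.json" % subdomain

def list_tickets(subdomain, email, api_token):
    # API-token authentication uses "email/token" as the basic-auth
    # username, per Zendesk's API documentation.
    creds = "%s/token:%s" % (email, api_token)
    req = urllib.request.Request(tickets_url(subdomain))
    req.add_header("Authorization",
                   "Basic " + base64.b64encode(creds.encode()).decode())
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tickets"]
```

Calling `list_tickets("yoursubdomain", "you@example.com", "yourtoken")` would return the account’s ticket list as Python dicts.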

Many suppliers make their offerings as easy to evaluate as Zendesk, Inc. made Zendesk for us. On the other hand, many suppliers are not quite so willing to share detailed information about their products and, especially, their underlying technologies. Products and technologies are, after all, software suppliers’ key IP. They have every right to protect this information, and they don’t feel that patent protection is enough. Their offerings are much harder to evaluate at the level of our Product Evaluation Reports.

Consider Products That Are Easy to Evaluate

We feel, as you should, that in-depth evaluations are essential to the selection of customer service products. You’ll be spending very significant time and money to deploy and maintain these products. You should never rely on supplier presentations and demonstrations to justify those expenditures. Certainly rely on our reports and use them as the basis for your further, deeper evaluation, including our new Ease of Evaluation criterion. Put the suppliers that facilitate these evaluations on your short lists.

Next IT Alme: Helping Customers Do All Their Work

On September 2, 2004, we published my article, “May I Help You?” It was a true story about my experience as a boy working in my dad’s paint and wallpaper store. The experience taught me all about customer service.

The critical lesson that I learned from my dad and from working in the store was that customers want and need your help for every activity they perform in doing business with you, from their first contact with you through their retirement.

That help was answering customers’ questions and solving customers’ problems. That’s the usual way that we think of customer service: helping with exceptions, the times that customers cannot do their work. But that help was also performing “normal” activities on customers’ behalves—providing the right rollers, brushes, and solvents for the type of paint they wanted to use, for example—or collaborating with customers to perform normal activities together—selecting a paint color for trim or a wallpaper pattern.

At Kramer’s Paint, my dad or I delivered all of that help—normal work and exceptions work. In your business, you deliver the help to perform customers’ normal planning, shopping, buying, installing/using, and (account) management activities through the software of self-service web sites and/or mobile apps or through the live interactions of your call center agents, in-store associates, or field reps. And, you deliver the help for customers’ exception activities through customer self-service apps on the web, social networks, or mobile devices or through the live interactions of customer service staff in call centers, stores, and in the field.

Virtual Assistants Crossover to Perform Normal Activities

Recently, in our customer service research, we’ve begun to see virtual assistant software apps cross over from helping customers with exception activities to performing normal activities on customers’ behalves, activities like taking orders, completing applications, and managing accounts. We wrote about this crossover a bit in our last post, about IBM Watson Engagement Advisor’s Dialog facility. And we provided links to crossover examples of Creative Virtual V-Person at Chase Bank and Nuance Nina Mobile at Domino’s.

Alme, the virtual assistant software app from Spokane, WA based supplier Next IT, can cross over to help customers perform normal activities, too. In fact, Alme has always performed normal activities for customers. One of our first reports about virtual assistants, a report that we published on March 13, 2008, discussed Jenn, Alaska Airlines’ Alme-based virtual assistant. We asked Jenn to find a flight for us with this request: “BOS to Seattle departing December 24 returning January 1.” Jenn did a lot of work to perform this normal activity. Her response was fast, accurate, and complete. We asked Jenn again in our preparation for this post. “She” prepared the “Available Flights” page for us. Once again, her answer was fast, accurate, and complete. All that’s left to do is select the flights. The illustration below shows our request and Jenn’s response.

alaska airlines blog

Next IT Alme Provides Excellent Support for Normal Activities

Alme provides these excellent facilities for performing normal activities, facilities that are one of its key strengths and competitive differentiators:

  • Support for complex, multi-step interactions
  • Rules-based personalization
  • Integration with external applications

Let’s take a closer look at them.

Support for Complex, Multi-Step Interactions

For normal activities, complex, multi-step interactions help virtual assistants collect the information needed to complete an insurance or loan application, order a meal, or configure a mobile device and the telecommunications services to support it, for example. Alme supports complex, multi-step interactions with Directives and Goals.

Directives

Directives are hierarchical dialogs of prompt-and-response interactions between Alme virtual assistants and customers. They’re stored and managed in Alme’s knowledgebase, and Alme provides tools for building and maintaining them. A Directive’s dialog begins when Alme’s processing of a customer’s request matches the request to one of the nodes in the Directive. The node presents its prompt to the customer as a text box into which the customer enters a text response or as a list of links from which the customer makes a selection. Alme then processes the text responses or the link selections. This processing moves the dialog:

  • To another node in the Directive
  • Out of the Directive
  • Into a different Directive

That customers’ requests can enter, reenter, or leave Directives at any of their nodes is what makes Directives powerful, flexible, and very useful. Alme’s analysis and matching engine processes every customer request and response to Directive prompts the same way. When the request (re)triggers a Directive, Alme automatically (re)establishes the Directive’s context, including all previous text responses and link selections. For example, financial services companies might use Directives to implement retirement planning for their customers. The customer might leave the Directive to gather information from joint accounts at the bank with the customer’s spouse before returning to the Directive to continue the planning, opening, and funding of an Individual Retirement Account (IRA).
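Here’s a minimal sketch of the Directive idea as a tree of prompt-and-response nodes; the prompts, link labels, and matching logic are invented, and Alme’s real engine also handles free-text matching and re-establishes context on re-entry.

```python
# A Directive sketched as linked prompt nodes. Everything named here is
# hypothetical; it only illustrates the prompt/response/move pattern.

class Node:
    def __init__(self, prompt, links=None):
        self.prompt = prompt
        self.links = links or {}  # link label -> next Node (None = exit)

def run_step(node, selection, context):
    """Record the customer's selection, then move the dialog."""
    context[node.prompt] = selection
    return node.links.get(selection)  # None means leaving the Directive

# Tiny fragment of a retirement-planning Directive:
fund = Node("How would you like to fund the IRA?")
open_ira = Node("Would you like to open an IRA?", {"yes": fund, "no": None})

ctx = {}
next_node = run_step(open_ira, "yes", ctx)
```

The accumulated `ctx` dictionary stands in for the Directive context that Alme preserves when a customer leaves and later re-enters the dialog.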

Goals

Goals let virtual assistants collect a list of information from customers through prompt-and-response interactions to help perform and personalize their activities. Virtual assistants store the elements of the list that the customer provides within the virtual assistant’s session data for use anytime within a customer/virtual assistant session. Alme can also use its integration facilities to store elements of the list persistently in external apps.

Goals can respond to customers dynamically, based on the information a Goal has collected so far. For example, if the customer provides all of the Goal’s information in one interaction, the Goal is complete, or fulfilled, and the Alme virtual assistant can perform the activity that the information drives. However, if the customer provides, say, two of four required items, the Goal can change its responses and request the missing information, leading the customer through a conversation. Authors or analysts create Goals by specifying a list of variables to store the information to be collected and the actions to be taken when customers do not provide all of it. In addition, Goals can be nested, which increases their power and flexibility and promotes their reuse.

Healthcare providers (Healthcare is one of Next IT’s target markets.) might use Goals to collect a list of information from patients prior to a first appointment. Retailers might use them to collect a set of preferences for a personal e-shopper virtual assistant.
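The slot-filling behavior of a Goal can be sketched as follows. This is an illustrative sketch of the concept, not Alme’s tooling; the class, the patient-intake slots, and the prompt wording are hypothetical.

```python
# Illustrative sketch (not Alme's actual API): a Goal as a set of required
# slots, prompting for whatever the customer has not yet provided.
class Goal:
    def __init__(self, name, required):
        self.name = name
        self.slots = {slot: None for slot in required}

    def update(self, provided: dict):
        """Record whatever items the customer supplied in one interaction."""
        for slot, value in provided.items():
            if slot in self.slots:
                self.slots[slot] = value

    @property
    def fulfilled(self):
        return all(v is not None for v in self.slots.values())

    def next_prompt(self):
        """Ask for the first missing item, or None if the Goal is fulfilled."""
        for slot, value in self.slots.items():
            if value is None:
                return f"Please provide your {slot}."
        return None

# A hypothetical pre-appointment intake Goal for a healthcare provider.
intake = Goal("patient intake",
              ["name", "date of birth", "insurer", "reason for visit"])
intake.update({"name": "Pat Doe", "insurer": "Acme Health"})
print(intake.fulfilled)      # False: two of four slots still missing
print(intake.next_prompt())  # Please provide your date of birth.
```

Nesting, as described above, would amount to a Goal whose slot is itself another Goal, filled only when the child Goal is fulfilled.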

Rules-Based Personalization

Personalization is essential for any application supporting customers’ normal activities. Why? Because personalization is the use of customer information—profile attributes, demographics, preferences, shopping histories, order histories, service contracts, and account data—to tailor the experience for individual customers. Performing activities on customers’ behalf requires some level of personalization.

For example, virtual assistants use a customer’s login credentials to access external apps that manage account or order data and, then, use that order data to help customers process a refund or a return. Or, to complete an auto insurance application, virtual assistants need profile data and demographic data to price a policy.

Alme’s rules-based personalization facilities are Variables, Response Conditions, and AppCalls. They are implemented within the knowledgebase items that contain the responses to customers’ requests.

  • Variables provide personalization and context. They contain profile data, external application data, and session data, for example.
  • Response Conditions are expressions (rules) on Variables. Response Conditions select responses and/or set data values of their Variables.
  • AppCalls (Application Calls) pass parameters to and execute external applications. They use Alme’s integration facilities to access external apps through JavaScript and Web Services APIs. For example, Jenn, Alaska Airlines’ virtual assistant, uses AppCalls to process information extracted from the customer’s question—departure city, arrival city, departure date, and return date—normalizing and formatting the information for correct handling by the airline’s booking engine. This AppCall checks city pairs to ensure the flight is valid and normalizes and formats dates so that the booking engine can display appropriate choices. AppCalls also integrate Alme with backend systems. Ann, Aetna’s virtual assistant, uses AppCalls to collect more than 80 profile variables from Aetna’s backend systems to facilitate performing tasks and to personalize answers for Aetna’s customers after they log in and launch Ann. (See the screen shot of Ann, below.)
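The interplay of Variables and Response Conditions can be sketched as rules over a dictionary of customer data. This is an illustrative sketch of the pattern, not Alme’s knowledgebase format; the variable names and response templates are hypothetical.

```python
# Illustrative sketch (not Alme's actual API): Response Conditions as
# predicates over Variables that select which response a knowledgebase
# item returns.
variables = {
    "logged_in": True,
    "account_balance": 1250.00,
    "preferred_name": "Alex",
}

# Each rule pairs a condition (a predicate over the Variables) with a
# response template; the first condition that holds wins.
response_conditions = [
    (lambda v: not v["logged_in"],
     "Please log in so I can look up your balance."),
    (lambda v: v["logged_in"],
     "Hi {preferred_name}, your balance is ${account_balance:,.2f}."),
]

def select_response(variables):
    for condition, template in response_conditions:
        if condition(variables):
            return template.format(**variables)
    return "I'm sorry, I can't help with that."

print(select_response(variables))
# Hi Alex, your balance is $1,250.00.
```

In this sketch, an AppCall would be the step that populates `account_balance` from an external system before the rules run.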

Integration with External Applications

The resources that virtual assistant applications “own” are typically a knowledgebase (of answers and solutions to expected customers’ questions and problems) and accounts on Facebook and Twitter to enable members of these social networks to ask questions and report problems. So, to perform normal activities, virtual assistants need to integrate with the external apps that own the data and services that support those activities.

Alme integrates with external customer service applications through JavaScript (front-end) and Web Services (back-end) interfaces. In Alme 2.2, the current version, Next IT has introduced a re-architected Alme platform that is more modular and more extensible. The new platform publishes JavaScript and Web Services interfaces to all Alme functionality and supports JavaScript and Web Services connections to external resources.

AppCalls use Alme’s integration facilities. To process an AppCall successfully, developers must first establish a connection between Alme and the external application. Jenn integrates Alme with Alaska Airlines’ booking engine. Ann integrates Alme with Aetna’s backend systems. Here’s a screen shot.
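The validation and normalization work the Jenn example describes can be sketched as the preprocessing an AppCall might perform before invoking the booking engine. The city codes, accepted date formats, and payload shape here are hypothetical, not Alaska Airlines’ actual API.

```python
# Illustrative sketch of what an AppCall might do before calling an
# external booking engine: validate city pairs and normalize dates so
# the back end receives well-formed input.
from datetime import datetime

# Hypothetical set of serviceable city codes.
VALID_CITIES = {"SEA", "PDX", "ANC", "LAX"}

def prepare_booking_request(depart_city, arrive_city,
                            depart_date, return_date):
    # Check the city pair is valid.
    if depart_city not in VALID_CITIES or arrive_city not in VALID_CITIES:
        raise ValueError("unknown city code")
    if depart_city == arrive_city:
        raise ValueError("departure and arrival cities must differ")

    def normalize(date_text):
        # Accept a couple of customer-friendly formats; emit ISO 8601.
        for fmt in ("%m/%d/%Y", "%B %d, %Y", "%Y-%m-%d"):
            try:
                return datetime.strptime(date_text, fmt).date().isoformat()
            except ValueError:
                pass
        raise ValueError(f"unrecognized date: {date_text}")

    # The normalized payload a back-end Web Services call would receive.
    return {
        "depart": depart_city, "arrive": arrive_city,
        "depart_date": normalize(depart_date),
        "return_date": normalize(return_date),
    }

print(prepare_booking_request("SEA", "ANC", "7/4/2025", "July 11, 2025"))
# {'depart': 'SEA', 'arrive': 'ANC',
#  'depart_date': '2025-07-04', 'return_date': '2025-07-11'}
```

The actual network call to the booking engine would go through whatever JavaScript or Web Services connection the developers established; this sketch stops at producing the normalized request.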

[Screen shot: Ann, Aetna’s virtual assistant]

Virtual Assistants Are Doing More of the Work of Live Agents

Next IT Alme was one of the first virtual assistant software products with the capabilities to perform normal activities. Its facilities are powerful and flexible. While integration with external applications will always require programming (and Next IT has simplified that programming), Alme’s facilities for supporting normal activities are built-in and designed for business analysts. They’re reasonably easy to learn, easy to use, and easy to manage.

By performing normal activities, virtual assistants are doing more of the work that live agents have been doing—quickly, accurately, consistently, and at a lower cost than live agents. That frees live agents to handle the stickiest, most complex customer requests, whether requests to perform normal activities or requests to answer questions and resolve problems. It’s also a driver for your organization to consider adding virtual assistants to its customer service and customer experience portfolio.

The Dialog Feature of IBM Watson Engagement Advisor

We just updated our Product Evaluation Report on IBM Watson Engagement Advisor. It’s an update to our July 10, 2014 report. Both the scientists at IBM Research and the developers in the IBM Watson Group have been busy improving Watson and Watson Engagement Advisor, busy and productive enough to drive us to do the update. Here are the highlights:

  • Dialog is a new feature of Watson Engagement Advisor that provides facilities to support complex interactions between virtual assistants and customers. These interactions include prompt and response conversations as well as business processes, transactions, and supplementary questions. Dialogs can guide customers through the necessary steps to an outcome or help answer customers’ vague and ambiguous questions.
  • Additional knowledgebase input file formats. MHT and ZIP files can now be ingested into Watson’s knowledgebase, joining the HTML, Microsoft Word, and PDF file formats.
  • Watson Experience Manager is the visual toolset that subject matter experts use to configure, train, test, and administer Watson Engagement Advisor deployments. Improvements include new tools to configure Dialog conversations.
  • The Cognitive Value Assessment (CVA) is a consulting offering designed to help organizations identify use cases and benefits through examination of issues and pain points in their customer and end user business processes.
  • Product positioning: first-contact self-service resolution.

Dialog is the most important product improvement. The Watson Group used technology from IBM’s May 2014 acquisition of Cognea, a virtual assistant software supplier based in Chatswood, New South Wales, Australia, to help build Dialog. The feature helps IBM catch up with its virtual assistant software competitors. The leading suppliers—Creative Virtual, IntelliResponse, Next IT, and Nuance—had all been offering Dialog-like capabilities for some time. Prompt-and-response conversations have become a key customer service requirement for virtual assistants. These conversations have been the approach for answering vague or ambiguous customer questions, helping virtual assistants gather the information that concretizes or disambiguates questions (or problems) like, “I can’t get my printer to work,” or “What’s my balance?”

And, on that “What’s my balance?” example, prompt-and-response conversations are also an approach for supporting personalized tasks and transactions, a hot trend and an emerging requirement for virtual assistants to “act” more like live agents in their interactions with customers. Personalized tasks and transactions take a bit of application integration—and therefore some programming—to access, retrieve, and use customer profile and account data, but, once a virtual assistant has that data, it can perform a wide range of activities for customers beyond delivering answers and solutions. Think of the Nuance Nina virtual assistant taking pizza orders for Domino’s (https://www.youtube.com/watch?v=noVzvBG0GD0) or Sara, a Creative Virtual V-Person virtual assistant at Commercial Bank of Dubai, helping consumers with online and mobile banking and presenting account balances (https://www.youtube.com/watch?v=rCvMJYQ0OT0).

Dialog is the mechanism that enables Watson Engagement Advisor virtual assistants to act more like live agents and web concierges. With that bit of application integration and programming, they can perform personalized and/or transactional tasks for customers. Watson Engagement Advisor’s virtual assistants also have the advantages of Watson’s cognitive technology, which lets them have more flexible, more varied conversations. Watson can answer many types of questions: questions whose answers are a simple fact, the definition of a term, the description of a topic, yes/no or true/false, the steps in a procedure, or an approach to troubleshooting a problem. Dialog lets customers interject relevant but out-of-band questions within prompt-and-response conversations; then, after the virtual assistant delivers the answer, they can either return to the Dialog or continue out-of-band interactions, perhaps entering other Dialog flows. For example, in a customer/Watson Engagement Advisor virtual assistant session about property and casualty insurance, the customer might interject a question about extra coverage for jewelry in the middle of a Dialog implementing the application for a standard property policy. The virtual assistant can answer the jewelry question, answer any additional jewelry coverage questions, return to the policy application conversation, or even complete an application for a jewelry rider. For new customers, the policy application conversation collects the appropriate customer data and passes it to the external app. For existing customers, the virtual assistant accesses the appropriate external app for the customer data. Live agents might do that data access manually from their desktops. Virtual assistants must do it with programming.
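The interject-and-return behavior described above follows a common conversational pattern: a stack of active dialogs. This sketch illustrates the pattern only; it is not Watson’s actual API, and the dialog names come from the insurance example.

```python
# Illustrative sketch (not Watson's actual API) of the out-of-band
# pattern: a stack of active dialogs lets the customer interject a
# question, get it answered, and resume where they left off.
class DialogStack:
    def __init__(self):
        self.stack = []

    def push(self, dialog_name):
        """Enter a new dialog, suspending whatever was in progress."""
        self.stack.append(dialog_name)

    def current(self):
        return self.stack[-1] if self.stack else None

    def finish(self):
        """Complete the current dialog and resume the one beneath it."""
        return self.stack.pop() if self.stack else None

session = DialogStack()
session.push("property policy application")
session.push("jewelry coverage question")   # out-of-band interjection
print(session.current())   # jewelry coverage question
session.finish()           # question answered; pop back
print(session.current())   # property policy application
```

A customer who instead wanders into another flow after the interjection would simply push a different dialog rather than popping back, which matches the “perhaps entering other Dialog flows” behavior described above.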

In the current Watson Engagement Advisor release, the Dialog functionality and tools are essentially what Cognea had built and offered. The Watson Group’s developers are working to integrate the functionality more seamlessly within Watson Engagement Advisor and to integrate and improve the tools within the Watson Experience Manager toolset. We don’t think that Dialog is part of any of the five or so live Watson Engagement Advisor deployments, but, going forward, we think that it will become part of most deployments. In fact, every virtual assistant should provide the capabilities to perform actions on behalf of customers.