HC Security Network News: As the most mature field in which artificial intelligence has landed, voice technology, with recognition accuracy at the 95% level, has gradually entered the commercialization stage. The realization of voice interaction depends mainly on two things: speech recognition and semantic understanding.
First, smart voice: too early to talk about the entrance, but indispensable
Smart speakers are the carnival on the surface; the essence is still users, data and services.
In November 2014, Amazon launched Echo, a smart speaker based on voice interaction. In 2015, iFLYTEK (Keda Xunfei) released a smart speaker, and in 2016 Google released the smart speaker Google Home.
Entering 2017, launches became even more intensive. In May, Lenovo released a smart speaker, Amazon released the touch-screen Echo Show, and Microsoft teamed up with audio equipment manufacturer Harman Kardon to create Invoke. In June, Apple released HomePod. At the same time, domestic Internet giants such as BAT have also signaled their intention to enter the game.
Smart speakers are not an end in themselves; what is being contested is the users, data and service portals behind them. In fact, an entrance product based on voice-interaction technology could be a home product such as a speaker or a TV, or indeed any smart device placed indoors. The speaker was chosen as the breakthrough point only because, in the initial stage, a carrier is needed that has a basic function of its own while being able to bear other functions.
For the user, what is needed is a tool that encapsulates many complicated applications and interfaces, so that services no longer have to be fetched from each specific application but through a unified voice-interaction portal. For the giant companies, the goal is to secure the next portal after the mobile Internet through which to reach users, obtain user data and keep providing services.
Looking at smart speakers alone, the interactive experience and the connected services are the key factors influencing user choice. Strip away the "smart speaker" framing and what remains is a piece of intelligent hardware built on voice-based human-computer interaction. At the algorithm level, this involves noise reduction, far-field recognition, wake-up and interruption handling, and natural language understanding such as multi-round dialogue and semantic analysis. On the hardware side, it mainly involves microphone-array technology for sound pickup and speaker processing for playback. It is the synergy between hardware and software that makes human-computer interaction more natural.
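To make this division of labor concrete, the sketch below strings the stages together in Python. Every class here is an illustrative placeholder rather than any vendor's SDK; a real speaker would run trained models for wake-word detection, far-field ASR, NLU and dialogue management behind these interfaces.

```python
# Minimal sketch of the software side of a voice-interaction pipeline.
# All component classes are illustrative stand-ins, not a specific vendor SDK.

class WakeWordDetector:
    def triggered(self, audio_frame) -> bool:
        # In practice: a small always-on keyword-spotting model.
        return b"wake" in audio_frame

class SpeechRecognizer:
    def transcribe(self, audio) -> str:
        # In practice: far-field ASR after beamforming / noise reduction.
        return "play some jazz"

class NLUEngine:
    def parse(self, text) -> dict:
        # In practice: intent classification + slot filling.
        return {"intent": "play_music", "slots": {"genre": "jazz"}}

class DialogueManager:
    def decide(self, frame: dict) -> str:
        if frame["intent"] == "play_music":
            return f"Playing {frame['slots'].get('genre', 'something')} for you."
        return "Sorry, I did not catch that."

def handle_utterance(audio_frame, audio):
    wake, asr, nlu, dm = WakeWordDetector(), SpeechRecognizer(), NLUEngine(), DialogueManager()
    if not wake.triggered(audio_frame):
        return None                      # stay silent until woken up
    text = asr.transcribe(audio)         # speech -> text
    frame = nlu.parse(text)              # text -> intent/slots
    reply = dm.decide(frame)             # dialogue policy -> response text
    return reply                         # would be handed to TTS for playback

print(handle_utterance(b"...wake...", b"<pcm audio>"))
```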
If playing music is the main function of a traditional speaker, for a smart speaker it has become almost an incidental option. What matters is not, or not only, sound quality, but the human-computer interaction experience and the quantity and quality of the services that can be supported and connected behind it. Whether for online Internet services or offline smart-home products, if a closed ecological loop of products, applications and data cannot be formed, the smart speaker's goal of becoming the entrance will be difficult to achieve.
The bleak sales of domestic smart speakers are also related to consumption habits, and cultivating users takes time. Compared with Amazon Echo's sales in the tens of millions, the domestic sales of counterparts such as the Dingdong speaker look dismal. Beyond possible differences in technology and application, the soil in which these products are rooted is also inherently different.
If the "smart" label is removed, a smart speaker is first of all a speaker. Household speaker penetration exceeds 85% in Europe and America but is under 20% domestically; this gap in demand for music and speaker equipment means that for European and American users a speaker may be a basic necessity of life, while for domestic users it may still be a minority hobby. As noted above, the speaker merely happens to be one of the carriers; the core is still the human-computer interaction portal of intelligent terminals in the Internet of Things era.
The way information is acquired and expressed determines that voice interaction is an indispensable part of this stage
Human-computer interaction in the PC Internet era relied mainly on the mouse and keyboard, and touch-screen interaction became the standard of the mobile Internet era. Which mode, then, will dominate interaction in the artificial intelligence era? Will it be the voice-controlled smart speaker, or the smart TV?
These may all be entry points for the smart home, but even Amazon Echo, with sales in the tens of millions and more than 10,000 skills, does not yet seem enough to become the interactive entrance of the artificial intelligence era.
From the perspective of information acquisition and expression, the evolution of the interactive portal must be a revolution from habit to instinct. On the acquisition side, research shows the share of external information obtained through each sensory organ: vision 60%, hearing 20%, touch 15%, taste 3%, smell 2%.
Vision, hearing and touch together account for as much as 95%. With this in mind, it is not hard to understand why PCs in the Internet era and smartphones in the mobile Internet era could not do without the mouse-and-keyboard combination or touch sensors, nor without a display, whether large or small.
On the expression side, in 1967 the American psychologist and communication scientist Albert Mehrabian and colleagues, after extensive experiments, proposed that the total information humans convey in communication = 55% body language + 38% vocal tone + 7% words. This may explain, to some extent, why smart speakers keep appearing on the scene yet have still failed to shoulder the role of the entrance.
We believe that the move from the abstract symbols entered by mouse and keyboard to the direct sliding and pressing of the touch screen has already brought interaction closer to human habit, and that future interaction will move closer to human instinct.
Voice may be only a phased result of human-computer interaction. Voice-based interaction may become the entrance for specific scenarios, but a fusion of voice and body movement is more likely to become the interactive portal of an era, and in the future there may be other modes still, such as brain waves.
Second, intelligent voice: related technologies and development history
Intelligent voice studies the processing of, and feedback on, voice information between human and machine; in terms of what it delivers, it studies how to realize human-computer interaction through voice. The relevant supporting technologies can be divided into basic voice technology, intelligence technology and big data technology.
Speech recognition accuracy improved rapidly after the introduction of deep learning. The goal of speech recognition is to enable the machine to recognize information in speech such as its content, the speaker and the language. Technically, the field has passed through two stages: standard template matching and statistical modeling (HMM-based).
In 2010, Yu Dong and Deng Li of Microsoft, in cooperation with Hinton, introduced deep learning into speech recognition to replace traditional feature extraction. With the introduction of deep learning and the various models derived from and combined with it, speech recognition accuracy has increased significantly.
In March 2017, by combining LSTM networks, a WaveNet language model and three strong acoustic models, IBM reduced the error rate of its telephone speech recognition on the Switchboard dataset to 5.5%. Whether measured against the 5.9% human transcription error rate reported by Microsoft in 2016 or the 5.1% human figure given by IBM, machines are now very close to human level.
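As a rough illustration of what replacing hand-engineered acoustic modeling with deep learning looks like, the sketch below defines a small feed-forward acoustic model in PyTorch that maps spliced feature frames to posteriors over HMM states, in the style of early hybrid DNN-HMM systems. The feature dimension, context width, layer sizes and number of states are arbitrary assumptions for illustration, not the configuration of any of the systems mentioned above.

```python
# Minimal sketch of a hybrid-style deep acoustic model: a small network mapping
# spliced acoustic feature frames (e.g. 40-dim filterbanks over 11 frames of context)
# to posteriors over HMM states. Real systems use far larger recurrent/CNN models.
import torch
import torch.nn as nn

class FrameAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, context=11, n_states=2000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * context, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_states),           # scores over HMM states
        )

    def forward(self, frames):                   # frames: (batch, feat_dim * context)
        return self.net(frames)

model = FrameAcousticModel()
dummy = torch.randn(8, 40 * 11)                  # a batch of spliced feature frames
posteriors = model(dummy).log_softmax(dim=-1)    # consumed by the HMM decoder in a hybrid system
print(posteriors.shape)                          # torch.Size([8, 2000])
```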
Speech synthesis has a history of more than 200 years, and its performance still has room to improve. Before the advent of computer technology, it was mainly simulated with hardware based on the principles of human vocal production; after computers appeared, sound quality, timbre and naturalness all improved. As the technology has evolved, speech synthesis has achieved good results in complexity, naturalness and sound quality, and current research focuses on improving the expressiveness of synthetic speech, such as intonation and emotion.
Voiceprint recognition is currently developing in the direction of deep learning, but whether traditional algorithms or deep learning are used, a voiceprint library must be established in advance.
Voiceprint recognition automatically identifies a speaker from the physiological and behavioral characteristics reflected in the speech waveform. Its security is comparable to that of biometrics such as fingerprints, palm prints and irises, and it has been used for identification in public security and judicial systems and for identity authentication in bank payment processes.
Combining voiceprint recognition with speech recognition can prevent spoofing by recordings through content verification, and combining it with emotion recognition makes it possible to sense whether the subject is under stress. Voiceprint recognition requires a corresponding voiceprint library, which must at minimum guarantee a reasonable distribution of gender, age, region, accent and occupation.
Test samples should cover the main influencing factors, such as dependence on text content, acquisition equipment, transmission channel, environmental noise, recording playback, voice imitation, time span, sample duration, health status and emotional state, so the voiceprint database has become an important threshold for breakthroughs in voiceprint recognition technology. At present, the most complete is the voiceprint identification library of the Ministry of Public Security.
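The sketch below illustrates, under simplifying assumptions, how such a voiceprint library is typically used at run time: enrolled speakers are stored as fixed-length embeddings, and a test utterance is accepted if its embedding is close enough to the claimed speaker's template. The extract_embedding function is a stand-in for a real trained speaker-embedding model, and the 0.6 threshold is an arbitrary example value.

```python
# Illustrative sketch of voiceprint (speaker) verification against a pre-built library.
# extract_embedding() is a placeholder for a trained speaker-embedding network.
import numpy as np

def extract_embedding(audio: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a speaker-embedding model here.
    rng = np.random.default_rng(abs(int(audio.sum())) % (2**32))
    return rng.standard_normal(256)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

voiceprint_db = {}                                   # speaker_id -> enrolled template

def enroll(speaker_id, enrollment_utterances):
    embs = [extract_embedding(u) for u in enrollment_utterances]
    voiceprint_db[speaker_id] = np.mean(embs, axis=0)

def verify(speaker_id, test_utterance, threshold=0.6):
    score = cosine(voiceprint_db[speaker_id], extract_embedding(test_utterance))
    return score >= threshold, score

enroll("alice", [np.ones(16000), np.ones(16000) * 2])
print(verify("alice", np.ones(16000)))               # (accepted?, similarity score)
```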
Natural language understanding is still at the stage of shallow semantic analysis, which mainly comprises three levels: lexical analysis, syntactic analysis and semantic analysis.
At present, machine understanding of sentences reaches only the level of semantic role labeling, that is, marking sentence constituents and active-passive relationships, which belongs to shallow semantic analysis. Enabling machines to understand human language better and interact naturally will depend on further advances in machine learning methods such as deep learning.
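For concreteness, the toy example below shows what each of the three levels produces for a single sentence. The labels are hand-written for illustration only; real systems obtain them from trained taggers, parsers and semantic-role labelers.

```python
# Toy illustration of the three analysis levels on one English sentence.
sentence = "The assistant plays jazz music"

# 1. Lexical analysis: tokenization + part-of-speech tags (hard-coded here).
tokens = sentence.split()
pos_tags = ["DET", "NOUN", "VERB", "NOUN", "NOUN"]

# 2. Syntactic analysis: a flat chunking into subject / predicate / object.
chunks = {"subject": tokens[:2], "predicate": tokens[2:3], "object": tokens[3:]}

# 3. Shallow semantic analysis: semantic role labeling of the predicate's arguments.
roles = {"predicate": "plays", "ARG0 (agent)": "The assistant", "ARG1 (thing played)": "jazz music"}

for level, result in [("lexical", list(zip(tokens, pos_tags))),
                      ("syntactic", chunks),
                      ("semantic roles", roles)]:
    print(level, "->", result)
```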
Multi-round dialogue builds mainly on technologies such as speech recognition, speech synthesis and natural language understanding, and its naturalness and accuracy still need to improve.
Multi-round dialogue systems are generally divided into task-oriented and chat-oriented types. Task-oriented systems help the user complete specific tasks, such as setting an alarm clock or checking the weather, while chat-oriented systems aim at emotional chat between human and machine, as in companion robots. Compared with single-round dialogue, multi-round dialogue improves the naturalness and accuracy of user interaction.
Dialogue management is the core of a multi-round dialogue system. Its functions are divided into dialogue state tracking (DST) and dialogue policy. The former updates the dialogue state, recording all user utterances and system behaviors so far; the latter produces the system behavior based on the DST state, deciding the next feedback or call.
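A minimal sketch of these two pieces for a toy "set an alarm" task is shown below: a DST function that merges newly recognized slots into the running state, and a policy function that either asks for a missing slot or issues the final call. The frame format and action strings are invented for illustration.

```python
# Minimal sketch of dialogue management: state tracking (DST) + dialogue policy.
def update_state(state: dict, nlu_frame: dict) -> dict:
    """DST: merge the newly recognized intent/slots into the running dialogue state."""
    state.setdefault("intent", nlu_frame.get("intent"))
    state.setdefault("slots", {}).update(nlu_frame.get("slots", {}))
    state.setdefault("history", []).append(nlu_frame)
    return state

def policy(state: dict) -> str:
    """Dialogue policy: decide the next system action from the tracked state."""
    if state.get("intent") == "set_alarm":
        if "time" not in state["slots"]:
            return "ASK(time)"                   # missing slot -> ask a follow-up question
        return f"CALL(set_alarm, time={state['slots']['time']})"
    return "FALLBACK(chitchat)"

state = {}
for frame in [{"intent": "set_alarm", "slots": {}},      # "set an alarm"
              {"slots": {"time": "7:00"}}]:              # "seven o'clock"
    state = update_state(state, frame)
    print(policy(state))
# ASK(time)
# CALL(set_alarm, time=7:00)
```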
Third, the development status of the intelligent voice industry
The market is expanding rapidly, and domestic growth has significantly outpaced the global rate
Driven by the development of technologies such as the mobile Internet, big data, cloud computing and deep learning, intelligent voice technology has gradually matured, and the industry has entered the stage of scenario-application deployment. Applications in the mobile Internet, smart home, automotive, medical and education fields have driven the rapid growth of the intelligent voice industry.
In 2015, the global intelligent voice market reached US$6.21 billion, a year-on-year increase of 34.2%. China's intelligent voice market has also been expanding: in 2015 the domestic industry reached 4.03 billion yuan, about 10% of the global market, and its growth rate was significantly higher than the global rate. The share is expected to rise to 14% in 2017.
All elements are working together, and intelligent voice has formed a complete industrial chain
Borrowing the "man-machine-material-method-environment" model for artificial intelligence applications proposed in our earlier report, the intelligent voice industry has formed a relatively complete industrial chain under the combined action of five elements: talent reserves, computing facilities, data accumulation, technical algorithms and application scenarios.
From the perspective of the industry chain, the intelligent voice industry can be divided into four parts. Basic research institutions: R&D and output of basic technologies such as speech synthesis, speech recognition and voiceprint recognition. Speech and semantic data providers: supplying speech and semantic databases, plus customized data collection and processing, to organizations doing algorithm research or technology output.
Voice technology providers: converting basic technologies into software or industry solutions, providing embedded or platform-based voice software services and industry-level intelligent voice system solutions.
Intelligent voice application providers: intelligent terminals such as smart mobile devices, smart in-vehicle systems and smart home devices, and various apps or software clients such as input methods and entertainment. By product attribute, these mainly comprise consumer products and professional industry applications.
The algorithm dividend is gradually disappearing, and a market dominated by one player is turning into multi-party competition
With the introduction and development of deep learning, the algorithm dividend of intelligent voice is gradually disappearing. Nuance has been the world's largest voice technology vendor since its merger with ScanSoft in 2005. With advanced speech recognition and natural language understanding technology and strong voice solutions, Nuance held 62% of the global voice market in 2012; together with Google and Microsoft, the total exceeded 85%.
Deep learning was first introduced into speech recognition in 2010; then, with improvements in computing power and the accumulation of massive voice corpora, recognition accuracy improved greatly. Although Nuance still ranked first globally in 2015, its share had fallen sharply to 31.6%, while the shares of Google, Apple, Microsoft and iFLYTEK grew rapidly, reaching 28.4%, 15.4%, 8.1% and 4.5% respectively.
The technology giants' open-sourcing of deep learning algorithms and machine learning frameworks has made it much simpler to call intelligent voice technology, and modular design has significantly lowered the threshold for application deployment and implementation.
At present, most of China's intelligent voice market is held by iFLYTEK, Baidu and Apple; in 2015 the three together accounted for 79% of the market. Among them, iFLYTEK's 44.2% share puts it in the leading position, while Baidu has entered with strong momentum and its share has grown rapidly.
Among the ten breakthrough technologies of 2016 announced by the authoritative US magazine MIT Technology Review, the Deep Speech 2 intelligent voice technology from Baidu's Silicon Valley lab was a notable entry. Internet giants such as Google, Microsoft, Apple and Baidu have obvious advantages in capital, data and reaching users of 2C applications; their strong involvement will transform the global intelligent voice industry from one dominant player to multi-party competition.
Technology, scenario applications and the positive feedback loop with data have become the three main barriers of the intelligent voice industry. Technical algorithm barriers: as intelligent voice technology develops, speech recognition is becoming more and more mature and open-source speech recognition tools are lowering the threshold, but stability in actual use remains to be solved.
Speech recognition has reached the critical point where quantitative change turns into qualitative change, and R&D on related technologies and supporting facilities can build a moat for an enterprise. Baidu, Sogou, iFLYTEK and others already achieve 97% recognition accuracy in quiet conditions and are developing toward applications that require higher accuracy and non-standard environments.
Application scenario barriers: 2B applications involve industries such as finance, telecommunications, medical care and transportation. These industries have very high requirements for system stability, attach great importance to real application cases, and select the most capable and experienced intelligent voice technology and service providers through strict bidding; once a provider passes evaluation, cooperation tends to remain stable, raising the barrier to entry for newcomers. At the 2C level, Internet companies have great advantages in commercial applications and access to information.
Data accumulation barriers: the key to improving the user experience and customer stickiness of intelligent voice applications in each scenario is accumulating voice and text data from real environments for iterative optimization. Once an intelligent voice application is launched, the closed data loop continuously strengthens this barrier.
At present, participants in the intelligent voice industry fall into three types: independent voice technology developers and service providers that grew out of research laboratories, such as Nuance, which originated in the Stanford Research Institute's STAR Lab, and iFLYTEK, which grew out of the University of Science and Technology of China; startups that develop and apply intelligent voice technology, such as AISpeech, Unisound, Mobvoi, SoundAI, Trio and others, hoping to seize the next generation of human-computer interaction; and technology giants such as Apple, Google, Microsoft, Amazon, Baidu, Tencent and Sogou.
Beginning in 2010, the Internet giants have laid out the intelligent voice industry in depth through independent R&D or mergers and equity stakes. Their intelligent voice layouts center on virtual assistants, and in order to secure market opportunities they have begun to deploy in segments such as smart cars, smart home, smart medical care and wearable devices.
Focusing on scenarios and leveraging hardware to improve the practicality and stability of voice technology
Because speech signals are diverse and complex, accuracy drops greatly in real scenarios once spatial distance, background noise, interference from other voices, echo, dialects, accents and the like are taken into account. Improving the user experience in real scenarios is the key to a breakthrough for intelligent voice technology; the technologies involved include far-field speech recognition, wake-word detection, full-duplex interaction and personalized recognition.
In October 2016, Intel and iFLYTEK announced the joint development of an AI chip, integrating the microphone array and far-field speech recognition into an SoC to form a complete far-field voice interaction chain.
At present, speech recognition of near-field speech with standard pronunciation is quite mature. Siri on the mobile phone can be regarded as this type, and the accuracy of domestic players such as iFLYTEK, Baidu and Sogou in near-field, quiet environments has been raised above 97%.
For far-field speech recognition, however, although the technical principle is almost the same as for the near field, the greater distance between the sound source and the microphone means the propagating sound is affected by other voices, echo and so on, and accuracy in real usage scenarios still places higher technical requirements on both hardware and software.
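To illustrate the hardware-plus-software nature of the far-field problem, the sketch below implements delay-and-sum beamforming, the simplest microphone-array technique for strengthening the target direction and averaging out uncorrelated noise. The array geometry, sampling rate and signals are toy assumptions; production systems use far more sophisticated adaptive beamformers together with dereverberation and echo cancellation.

```python
# Hedged sketch of delay-and-sum beamforming for a linear microphone array.
import numpy as np

def delay_and_sum(mic_signals, mic_positions, doa_deg, fs=16000, c=343.0):
    """mic_signals: (n_mics, n_samples); mic_positions: positions in metres along
    the array axis; doa_deg: direction of arrival relative to broadside."""
    doa = np.deg2rad(doa_deg)
    delays_s = mic_positions * np.sin(doa) / c          # per-mic propagation delays
    delays_n = np.round(delays_s * fs).astype(int)      # nearest-sample approximation
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays_n)]
    return np.mean(aligned, axis=0)                     # coherent sum of aligned channels

# Toy demo: 4 mics spaced 5 cm apart pick up the same 200 Hz tone plus independent noise.
fs, n = 16000, 1600
t = np.arange(n) / fs
positions = np.arange(4) * 0.05
clean = np.sin(2 * np.pi * 200 * t)
mics = np.stack([clean + 0.5 * np.random.randn(n) for _ in positions])
enhanced = delay_and_sum(mics, positions, doa_deg=0.0, fs=fs)
print(np.std(mics[0] - clean), np.std(enhanced - clean))   # residual noise shrinks after summing
```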
Fourth, the application prospects of intelligent voice
The giants are seizing the virtual voice assistant and gradually cutting into scenario applications.
Voice interaction is simple and fast and frees the hands and eyes, which can bring a huge optimization of the user experience in many scenarios. For example, avoiding cumbersome operations: many mobile phones let the user open an application directly by voice.
Very small or absent screens: on a smart TV, opening a specific program directly by voice instead of with an inconvenient remote control, or accessing the Internet through a smart wearable device. Hands and eyes otherwise occupied: for example while driving, or when taking meeting minutes. Exploiting the value of voice data: for example using voice data from electronic medical records to assist diagnosis and treatment.
"Internet Queen" Mary Meeker pointed out in Internet Trends 2016 that voice interaction will become a new paradigm of human-computer interaction. Indeed, with the gradual maturing of intelligent voice technology and the development of natural language understanding, voice has become an important means of interaction between people and smart devices in different scenarios.
The giants have cut into intelligent voice application scenarios with virtual voice assistants. Because the Internet of Things spans so many fields, factors such as cross-platform, cross-device and cross-brand fragmentation constrain the industry's development, and unified standards are the foundation for the IoT industry. Following this logic, technology giants such as Google, Microsoft and Amazon have taken the intelligent virtual assistant as their entry point, building open platforms and attracting developers to construct application ecosystems in an open manner.
Whether for consumer products or professional industry applications, the number of virtual digital assistant users and the market size are growing rapidly. The growing usability of related technologies such as speech and semantics has expanded the virtual digital assistant market. In terms of application direction and scenario, it is used mainly in consumer products and professional industry applications.
The consumer market is mainly 2C or 2B2C, applied to lifestyle scenarios such as mobile phones, smart cars, smart home and wearable devices; professional industry applications are mainly 2B, applied to specific scenarios in industries such as medical care, education, call centers and court trials.
According to Tractica's forecast, the number of active consumer virtual assistant users will increase from 390 million in 2015 to 1.8 billion in 2021. The number of active enterprise virtual assistant users will increase from 155 million in 2015 to 843 million in 2021. The virtual assistant market will grow from $1.6 billion in 2015 to $15.8 billion in 2021.
Consumer product application scenario
In the consumer market, the function of the intelligent virtual assistant is to implement device control, schedule management, information inquiry, life services, emotional companionship and so on through voice interaction.
On the one hand, it can access third-party applications and services through an open platform to enrich the assistant's functions; at present, mobile virtual assistants are connecting all kinds of apps in order to win the terminal. On the other hand, it can be embedded into intelligent hardware terminals, extending to products such as cars, homes and wearable devices and establishing an ecosystem of consumer-grade intelligent voice products.
Intelligent voice + car
Both hands and eyes are occupied while driving, making voice the most appropriate way to interact in this scenario. The combination of intelligent voice and the car centers on intelligent in-vehicle products that handle navigation, music search and playback, and message dictation by voice. With the development of the Internet of Vehicles, these will be further integrated with social, entertainment, catering and other services, enhancing the driving experience on the premise of safety.
Data from research organizations such as Tencent Auto show that, through successive iterations of intelligent in-vehicle systems, car owners have increasingly recognized the role and importance of voice interaction. IMS Research predicts that by 2019, 55% of new cars worldwide will be equipped with intelligent voice systems.
In the smart car field, voice recognition giants such as Nuance, Apple, Google, iFLYTEK and Baidu have launched, respectively, the Dragon Drive in-vehicle voice development platform, CarPlay, Android Auto, an in-vehicle voice system and CarLife, and have reached cooperation agreements with car manufacturers to seize the emerging smart car market.
Smart voice + home
The smart home industry is in a period of rapid development, and voice control has gradually become a standard capability. Intelligent voice can be combined with TVs, audio systems, air conditioners, curtains, lamps, toys and other home devices, as well as with the smart home control hub, to provide a single entrance for control through voice interaction.
The advance of big data and artificial intelligence, the falling cost of key technologies and components, and the establishment of industry-alliance standards have driven rapid growth in the smart home market. According to Statista, the global smart home market reached US$16.8 billion in 2016, with the Chinese market accounting for 7%; by 2021 the global market is expected to reach US$79.3 billion, with China's share rising to 17%.
Foreign Internet giants have entered the smart home field by combining smart home products with intelligent voice. Apple launched the HomeKit smart home platform in 2014 and continues to integrate it with Siri.
Amazon launched the Alexa-equipped Echo smart speaker in 2014, which can play music and news, place online shopping orders, call an Uber and order takeout by voice. According to estimates by CIRP and RBC Capital Markets, cumulative sales of the Echo series since its 2014 launch are close to 10 million units, with sales revenue of roughly US$800 million to US$1 billion.
In 2016, Google launched the Google Home smart speaker equipped with the Google Assistant virtual assistant and is actively strengthening Google Assistant's presence in the smart home field. From the giants' moves it is clear that the integration of intelligent voice with the smart home is the general trend.
Smart voice + wearable device
Wearable devices are constrained by their hardware form, and voice interaction has clear advantages over touch. The introduction of intelligent voice technology frees such devices from the smartphone and creates an independent experience. For example, Mobvoi's Ticwear has a built-in SMD SIM chip and 3G communications module, giving it an independent number and real-time connectivity, and it supports fully Chinese voice interaction including voice dialing, SMS, photos, WeChat voice replies and voice search.
The penetration of intelligent voice into wearable devices has contributed to the growth of both the wearable device industry and intelligent voice applications. Apple released the AirPods wireless earphones in 2016, enabling voice interaction with Siri on the iPhone.
According to the online sales report for the US wireless headphone market released by market research firm Slice Intelligence, AirPods took a 26% share of the wireless headphone market within one month of launch.
According to a forecast by the Prospective Industry Research Institute, the market for fitness and sports wearable devices in China will grow from 9 billion yuan in 2015 to 24.4 billion yuan in 2021, a compound annual growth rate of 18%. The penetration of intelligent voice into wearable devices will promote rapid growth of the intelligent voice industry.
Professional industry application scenario
Virtual assistants in the professional market are suited to a variety of application scenarios. In terms of function, the main forms are speech recognition and transcription, plus analysis of speech and semantic content. Taking medical care, education and customer service as examples, the deep fusion of voice technology with the scenario will be the moat that builds technical barriers on the application side.
Intelligent voice + medical
Intelligent voice has three main applications in the medical industry: voice-guided robots, voice input and transcription of electronic medical records, and voice input and transcription of clinical reports. Voice entry greatly improves doctors' efficiency and the quality of their work; patients can download and print through the voice-based electronic medical record system and obtain a complete, clear and easy-to-understand case history; and hospitals can manage the diagnosis and treatment process and medical information scientifically.
As voice medical records accumulate, big data and deep learning techniques can be used to mine the value of case-record voice data and achieve intelligent assisted diagnosis and treatment.
Nuance is a global leader in intelligent voice medical solutions. Its medical solutions cover 72% of medical institutions in the United States, with customers in more than 30 countries, and support more than 300 million doctor-patient data exchanges every year.
It serves more than 500,000 doctors and 10,000 medical institutions each year, and its medical products are diverse: clinical documentation improvement (CDI), clinical speech recognition, real-time dictation, computer-assisted coding, medical quality control, mobile cloud computing and more.
Domestically, iFLYTEK is also actively deploying in the medical field. In 2016, its speech-based outpatient medical record collection system formally entered a pilot with the National Engineering Laboratory of Oral Digital Medical Technology and Materials at Peking University Hospital of Stomatology. At present, iFLYTEK's intelligent voice system has been used in more than 20 hospitals, including Peking University Hospital of Stomatology, Ruijin Hospital and the 301 Hospital.
Intelligent voice + education
The application of intelligent voice in education focuses on the core needs of "learning, practicing, testing and evaluation" in the education system. The main products include intelligent voice training and assessment, and interactive teaching.
As a pioneer of domestic intelligent voice applications in education, iFLYTEK has applied intelligent speech technology to products such as oral-language training and examinations, interactive teaching and early-childhood intelligent hardware, and its speech-based semantic analysis technology has gradually begun to be applied to the grading of subjective questions and other steps.
Intelligent voice + customer service
The combination of intelligent voice and customer service can be applied in industries such as finance, telecommunications, transportation, O2O and tourism. The main forms are intelligent question answering, voice quality inspection, corpus mining and privacy protection.
Compared with traditional customer service, introducing intelligent voice plays three roles. Reducing operating costs: intelligent customer service effectively reduces the number of human agents and cuts training costs, while intelligent voice quality inspection improves inspection efficiency and reduces its labor cost.
Improving marketing capability: intelligent customer service responds rapidly, gives quick and consistent answers to key and hot issues, and guarantees standardized service online 24 hours a day, providing customers with solutions to their problems and supporting business decisions.
Full-text transcription by speech recognition makes it possible to inspect the quality of every call, and natural language processing can then analyze the text, mine customer information and help formulate business strategy. It also respects customer privacy by hiding customers' true identities and preventing human agents from harassing customers.
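As a simplified illustration of voice quality inspection on top of full-text transcription, the sketch below checks each call's agent turns against required service phrases and forbidden wording. The rules and transcripts are invented examples; real deployments combine such rules with NLP models for sentiment, topic and compliance analysis.

```python
# Illustrative rule-based quality inspection over call-centre transcripts.
import re

REQUIRED_PHRASES = ["hello", "is there anything else"]          # e.g. greeting, closing script
FORBIDDEN_PATTERNS = [r"\bguarantee(d)? returns?\b", r"\bshut up\b"]

def inspect_transcript(agent_turns):
    text = " ".join(agent_turns).lower()
    issues = []
    for phrase in REQUIRED_PHRASES:
        if phrase not in text:
            issues.append(f"missing required phrase: '{phrase}'")
    for pattern in FORBIDDEN_PATTERNS:
        if re.search(pattern, text):
            issues.append(f"forbidden wording matched: {pattern}")
    return {"passed": not issues, "issues": issues}

calls = {
    "call_001": ["Hello, thanks for calling.", "Is there anything else I can help with?"],
    "call_002": ["We guarantee returns of 10% a year."],
}
for call_id, turns in calls.items():
    print(call_id, inspect_transcript(turns))
```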
According to China Industrial Information Network, the total number of call-center seats in China reached 850,000 in 2014 and has grown steadily in recent years. As the demographic dividend fades, demand for intelligent customer service will grow stronger and stronger, leaving intelligent voice considerable room for penetration in the customer service field.
Intelligent voice is already widely applied in call centers, and Nuance, iFLYTEK, Tencent and Alibaba have all laid out corresponding businesses.
Editor in charge: Zhang Zequn