The 2014 Afeka Conference for Speech Processing will feature workgroups on speech-processing-related topics.
Participation is included in the standard registration fee.
The goal of each workgroup is to generate a discussion between industry and academia on a specific speech processing technology or application. Workgroup discussions will focus on data exchange, identifying barriers, deliberating possible solutions and developing recommendations for advancing technologies in the field. Each workgroup will be chaired by a research or technology professional from the field and will include several short presentations from academia, industry and application/technology end-users, followed by a discussion.
If you are interested in giving a presentation in any of the workgroups, please contact: SPD-Technical@afeka.ac.il
See below for the list of speakers in each workgroup.
All conference participants will be invited to join the workgroup of their choice.
Cognitive Computing Workgroup
Chairman: Moshe Wasserblat, Intel
Is the question "can computers think?" similar to the question "can submarines swim?" Cognitive computing is an emerging paradigm in intelligent computing methodologies and systems that implements computational intelligence through autonomous inference and perception inspired by the mechanisms of the human brain. This workgroup will focus on cognitive computing challenges and will enable an open discussion on future directions and new initiatives, specifically in Israeli academia and industry.
Speakers:
- Moshe Wasserblat, NLU Architect, INTEL
"Introduction"
- Michel Assayag, Manager of Cognitive Computing, INTEL
"Cognitive Computing at Intel"
- Prof. Ronen Feldman, Hebrew University
"Unsupervised Information Extraction"
- Yoav Degani, Founder & CEO, VoiceSense
"Speech Personality Profiling"
- Michal Rozen-Zvi, Senior Manager, Analytics Department, IBM
“IBM Vision of the New Cognitive Era”
Mobile Multimodality Workgroup
Chairmen: Dr. Nava Shaked, HIT & BBT and Mr. Eran Aharonson, Afeka
This workgroup will present the integrated ecosystem required to build successful mobile solutions composed of various complex and mixed technologies in the areas of smartphones, tablets, automotive, TV, wearable computing and gaming. It will cover the value chain components necessary for a solid, high-quality solution, starting with low-level hardware and sensors, through DSP methods for enhancement, to various recognition and machine learning methodologies. Companies and researchers will be invited to present solutions and applications based on multimodal integration, with an emphasis on industry-oriented solutions that affect business and customer experience through multimodal technology interface design.
The target audience will include:
- Industry leaders & users
- Developers of innovative prediction, recognition and machine learning methods and algorithms
- Developers and Integrators of Multimodal solutions and mobile applications
- User interface designers
Speakers:
- Nava Shaked, HIT & BBT, and Eran Aharonson, Afeka College
“Mobile Multimodality”
- Shlomo Peller, CEO & Founder, Rubidium
“Embedded Speech Recognition as Means for Always-On Hands-Free User Experience”
- Eli Jacobson, Design entrepreneurship Shenkar College & HIT
“Wearables - Technology, Design and Fashion”
- Boaz Zilberman, CEO & Founder, Project RAY
“Eye-Free User Interfaces”
- Itay Katz, Co-Founder & CTO, Eyesight Technologies
“Touch Free Interface - Machine Vision”
- Gal Melamed, CEO & Founder, WonderVoice
“Touch-less Voice Assistance in the Land of Non-formal Social Networks & Apps”
Speech Recognition Workgroup
Chairman: Dr. Irit Opher, NICE Systems
This workgroup will focus on innovations in ASR and their possible and future use in the industry. We will discuss recent advancements in training acoustic models, e.g. the use of DNNs, different approaches to linguistic modeling and the resulting improvements in commercial and security applications. We will try to answer a few questions such as:
- Is the problem of obtaining, storing, and processing vast amounts of data always solved by unsupervised learning?
- Are there new ways of supporting non-native speech?
- Is there good coverage of children's speech?
- What new areas of research would the industry like to see for the upcoming challenges in speech recognition and speech analytics?
Companies and researchers from all areas are invited to contribute with interesting questions, insights and hopefully some solutions based on experience.
Speakers:
- Dr. Irit Opher, Content Analysis Manager, Cyber & Intelligence Solutions Division, Nice Systems
"Overview - New Solutions for Old and New Problems"
- Yishay Carmiel, CEO, S-Infinity
“Deep Learning for Speech Recognition”
- Prof. Sharon Gannot, Faculty of Engineering, Bar-Ilan University
“ASR and Dereverberation”
- Ron Hecht, Senior Researcher, GM Advanced Technical Center
“Adjusting Language Models based on Driving Workload”
Speaker VRD Workgroup
Chairman: Dr. Itsik Lapidot, Afeka
The focus of this workgroup will be on speaker verification, recognition and diarization (VRD) technologies and their use in existing and future applications; the benefits and challenges involved in these areas; new technological innovations currently being implemented; and a wish list for future capabilities. Presentations will be made by companies that have already integrated speaker verification, recognition or diarization into a specific application or service; those that are looking to integrate any of these technologies; and technology researchers or developers in the field.
Speakers:
- Dr. Itsik Lapidot, ACLP, Afeka
"The Challenges of Speaker Verification, Recognition and Diarization"
- Pierre-Michel Bousquet, LIA and IUT, University of Avignon
"An Overview of LIA Laboratory Thematics: Focus on Solutions Proposed for Speaker Verification / Recognition / Diarization"
- Alon Pinhasi, Senior Speech Researcher, NICE Systems
"NICE Systems - Speaker Technology Overview"
- Dr. Gennady Karvitsky, Principal Biometric Researcher, Nuance Communications
"Nuance Voice Biometrics Review"