So I’ve had to pivot already, having discovered that my designs and goals were way beyond any attainable scope, budget, or proficiency level.
I’m happy with where we are right now in terms of our design, but I can’t help thinking that others are discovering the same thing: creating any kind of AI logic, goals, or customization is enormously complicated and expensive.
NOTE: the solution is NOT to rely on the various chatbot kits. IMHO, these platforms do a disservice to the potential of what chatbots can be.
Truly useful AI, and even NLP, isn’t quite there yet. I believe it will get there, but today we need to live in reality and build for the tech that actually works now.
Later we can upgrade everything with AI/NLP as it improves over time.
What kind of functionality and experiences do you want your AI to provide?
The big vision is for Synthetic Personality to become a reality. A back-of-the-napkin estimate puts that at a $50M+ vision.
Until then, we’ll be happy:
- looking at UGC and extracting keywords, thoughts, and phrases, which can then be fed to…
- an intelligent conversation engine
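A minimal sketch of that pipeline, with every name, stopword list, and template invented for illustration: pull the most frequent non-stopword terms out of a piece of UGC, then hand them to a trivial template-based responder standing in for the conversation engine.

```python
import re
from collections import Counter

# Tiny stopword list for illustration; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "is", "are", "it", "to",
             "of", "in", "on", "this", "that", "was", "for", "but"}

def extract_keywords(text, top_n=5):
    """Return the most frequent non-stopword terms in a piece of UGC."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

def respond(keywords, templates):
    """Stand-in 'conversation engine': match keywords against canned templates."""
    for kw in keywords:
        if kw in templates:
            return templates[kw].format(kw=kw)
    return "Tell me more."

review = "The battery life on this phone is great, but the battery drains fast on video."
keywords = extract_keywords(review)
reply = respond(keywords, {"battery": "How is the {kw} holding up for you?"})
```

The point of the split is that the extraction step and the response step can be upgraded independently later, swapping either for real NLP as it matures.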
Any more and I’d have to kill you… (unless you’re looking for some early-stage angel investments, of course…)
I agree that it’s better to avoid the bot frameworks, but on the other hand, using a bot framework in the early days of developing the Engazify bot gave us a lot of leverage and room for experimentation. Even though we currently use barely 10% of the original framework components, I am really glad we used a framework (Botkit in our case); otherwise it would have taken us longer to hit the market.
AI is definitely expensive to build, but you can start with the basics and gradually increase the scope and the tech. We are also taking it a bit slow and watching how the market reacts to what we have built. It does not make sense to build something really futuristic when people won’t even use it, IMHO. But I am sure everyone has a different opinion and approach on this.
In my opinion, the problem with bots is not a technology problem but a methodology problem.
The majority of chatbot frameworks build elements of technology: scripts, connectors to messenger platforms, intent recognition, broadcasting features, and so on.
But I have not seen a framework that helps reproduce human-like behavior: conversation management, reasoning, topic navigation, empathy, and so on. Did I miss something?
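To make the gap concrete, here is a toy sketch of one of those missing pieces, topic navigation: a stack-based manager that remembers the previous topic and can steer back to it after a digression. Everything here (class name, methods, topics) is invented for illustration, not an API from any existing framework.

```python
class DialogueManager:
    """Toy topic-navigation manager: tracks a stack of conversation topics."""

    def __init__(self):
        self.topic_stack = []

    def user_says(self, topic):
        # Push a new topic whenever the user changes subject.
        if not self.topic_stack or self.topic_stack[-1] != topic:
            self.topic_stack.append(topic)
        return f"(now talking about {topic})"

    def return_to_previous(self):
        # Pop the digression and steer back to the earlier topic.
        if len(self.topic_stack) > 1:
            self.topic_stack.pop()
        return f"Anyway, back to {self.topic_stack[-1]}."

dm = DialogueManager()
dm.user_says("billing")
dm.user_says("weather")      # user digresses
resume = dm.return_to_previous()
```

Even this crude stack gives a bot something most script-based kits lack: a memory of where the conversation was before the tangent.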
So, for my degree project (at UC Berkeley) we built an NLP API aimed at understanding input text and generating intelligent questions from it. Our initial focus was on doing it all through NLP, until we did some usability testing and realized that people didn’t want intelligent questions; they wanted multiple-choice questions.
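A cloze-style generator captures that pivot: instead of free-form “intelligent” questions, blank out a term and offer fixed options. This is a hedged sketch of the idea, not the original project’s API; the sentence, answer, and distractors are all made up.

```python
import random

def make_multiple_choice(sentence, answer, distractors, seed=0):
    """Blank out `answer` in `sentence` and shuffle it in with the distractors."""
    question = sentence.replace(answer, "_____")
    options = distractors + [answer]
    random.Random(seed).shuffle(options)  # seeded shuffle keeps the example deterministic
    return question, options

question, options = make_multiple_choice(
    "Paris is the capital of France.",
    answer="Paris",
    distractors=["Lyon", "Marseille", "Nice"],
)
```

The hard NLP problem then shrinks from generating a good question to picking a good answer span and plausible distractors, which is a much smaller surface to get right.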
That said, it would be interesting to see a progressive enhancement strategy for bot design that emphasizes “intelligence”, semantic enhancements, and external scripting technologies.
Well, that is a great point, isn’t it? When humans want something or want help, they typically don’t want long conversations either; they just want options. So what place does a conversational bot that shows empathy really have? Therapy seems to be my only logical conclusion, and while important, that is a fairly restrictive context.
The main trick, it would seem, is for the bot to find its instruction amongst all the superfluous things and the varying ways in which we communicate. Do I really care that it empathises with me in text? Not really. More so if it were a robot dressed as a human standing in front of me, though.