Show HN: Multimodal perception system for real-time conversation (raven.tavuslabs.org)

26 points by mert_gerdan 4 hours ago

4 comments:

by ycombiredd 7 minutes ago

Hmm... My first thought is: great, now not only will e.g. HR/screening/hiring hand off the reading/discerning tasks to an ML model, they'll also outsource the things that require any sort of emotional understanding (compassion, stress, anxiety, social awkwardness, etc.) to a model.

One part of me has a tendency to think "good, take some subjectivity away from a human with poor social skills", but another part of me is repulsed by the concept, because we see how otherwise capable humans will defer to the perceived "expertise" of an LLM, whether out of misplaced trust in the machine or out of laziness (see the recent kerfuffles in the legal field over hallucinated citations, etc.).

Objective classification in CV is one thing, but subjective identification (psychology, pseudoscientific forensic sociology, etc.) via a multimodal model triggers a sort of danger warning in me as an initial reaction.

Neat work, though, from a technical standpoint.

by Johnny_Bonk 9 minutes ago

Holy

by ashishheda 34 minutes ago

Wonder how it works?

by jesserowe 4 hours ago

the demo is wild... kudos

Data from: Hacker News, provided by Hacker News (unofficial) API