Integration of OpenFace in other projects #409
Comments
Talk about good timing! A professor in my lab just got OpenFace working with ZeroMQ (a brokerless messaging library) in real time: #375. You might be interested in FACSvatar, a framework that uses OpenFace's FACS data to animate avatars: https://github.com/NumesSanguis/FACSvatar. Everything in that network is set up as modules communicating over ZeroMQ, so it seems your module approach would fit right in there. If you don't need real-time analysis, then you can do it without Windows if you use FACSvatar's. My next step is trying to generate FACS using deep neural networks, so that could also benefit from an emotion detection module ^_^
Hi, There are a number of ways you can integrate OpenFace into your own project, depending on whether you want "online" or "offline" integration. For offline, you could just use OpenFace to process data and output .csv files that are then consumed by other modules. For online integration, you could follow the suggestion from @NumesSanguis and use a messaging library like ZeroMQ to communicate between OpenFace and your project. You could also integrate it using various inter-process communication tools, such as Named Pipes on Windows. Another option is to include OpenFace as a C++ library in your project (this would require a reasonable amount of engineering, though). There are many other alternatives as well. Thanks,
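As a rough illustration of the online route, here is what a minimal receiving module could look like using the plain ZeroMQ C API. The endpoint, port, and message format are assumptions made for the sketch; FACSvatar and the patch in #375 define their own conventions, so adapt accordingly.

```cpp
// Minimal ZeroMQ SUB socket that receives whatever an OpenFace-side
// publisher sends, e.g. one line of AU/landmark data per frame.
// Build with: g++ -std=c++11 openface_sub.cpp -lzmq
#include <zmq.h>
#include <cstdio>

int main() {
    void* ctx = zmq_ctx_new();
    void* sub = zmq_socket(ctx, ZMQ_SUB);

    // Endpoint is a placeholder; use whatever the publishing side
    // (e.g. a patched FeatureExtraction) actually binds to.
    zmq_connect(sub, "tcp://127.0.0.1:5570");
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   // subscribe to all messages

    char buf[4096];
    while (true) {
        int n = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
        if (n < 0) break;                                   // interrupted or error
        if (n > (int)sizeof(buf) - 1) n = sizeof(buf) - 1;  // message was truncated
        buf[n] = '\0';
        std::printf("frame data: %s\n", buf);    // hand off to your own module here
    }

    zmq_close(sub);
    zmq_ctx_destroy(ctx);
    return 0;
}
```

The offline route needs no extra code at all: point your other modules at the .csv files that FeatureExtraction writes out.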
Dears,
Thank you both so much for introducing ZeroMQ; I'm sure it will come in handy in my project in the future.
Sorry for the late answer. My research has actually shifted, and now I'm going to test my model on just one emotion: sadness!
So in this new scenario I know the user's affective state, but what is crucial is the exact amount of their sadness!
So do you know if OpenFace (or any other software, library, app, etc.) can measure the exact value of emotion intensity?
From a technical point of view, I need to train my model for each user offline so that it adapts to their personality, and then I have to test it in real-time interaction.
Thanks in advance,
Elahe
OpenFace does not support emotion recognition; instead, it recognizes facial expressions (Action Units). You could, however, use the features extracted by OpenFace as input for building an emotion recognition system. Thanks,
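To make the "AUs as features" idea concrete, here is a small sketch that reads the AU intensity columns from an OpenFace output .csv and averages the ones classically associated with sadness in FACS coding (AU01, AU04, AU15) into a crude per-frame score. The file path is made up, the column names should be checked against your own output, and the averaging is purely illustrative rather than a validated emotion model; a trained classifier or regressor over all AU columns would be the real system.

```cpp
// Toy per-frame "sadness" proxy from OpenFace AU intensities (AUxx_r columns).
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> split_csv(const std::string& line) {
    std::vector<std::string> cells;
    std::stringstream ss(line);
    std::string cell;
    while (std::getline(ss, cell, ',')) cells.push_back(cell);
    return cells;
}

int main() {
    std::ifstream in("processed/webcam.csv");   // example path, not a real default
    std::string line;
    if (!std::getline(in, line)) return 1;      // header row

    // Find the columns for the AUs linked to sadness in the FACS literature.
    const std::vector<std::string> wanted = {"AU01_r", "AU04_r", "AU15_r"};
    std::vector<int> idx;
    for (const std::string& name : wanted)
        for (std::size_t i = 0; i < header.size(); ++i)
            if (header[i].find(name) != std::string::npos) idx.push_back((int)i);
    if (idx.size() != wanted.size()) return 1;  // columns not found in this file

    // Average those intensities (0-5 scale in OpenFace) for each frame.
    while (std::getline(in, line)) {
        std::vector<std::string> row = split_csv(line);
        double sum = 0.0;
        for (int i : idx) sum += std::stod(row[i]);
        std::cout << "sadness proxy: " << sum / idx.size() << "\n";
    }
    return 0;
}
```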
@elahia I would argue you cannot get an "exact" value of sadness. There are different theories of emotion, but I don't think anyone has successfully quantified an emotion. Some argue this is due to limitations of technology, but do you ever hear a percentage of sadness? In daily life you only hear: a bit sad, sad, and very sad. Another explanation could be that emotions are a social reality, not a physical reality, which means an emotion exists only in a person's mind. That means you have as many 'sad's as there are people viewing the scene. We can still communicate the feeling, though, because most people's concept of sad is similar. This means you can perhaps only measure how much people's concepts match, not get an exact value. If you're interested in that, look up the Theory of Constructed Emotion, by e.g. Lisa Feldman Barrett.
@NumesSanguis thank you so much for your explanation.
Yes, I see what you mean and I agree, but I thought that with wearable sensors, heart rate and the like, maybe we can get an approximation of it. Do you know if that is possible?
Cheers,
Elahe
@elahia There are a lot of people in computer science who indeed believe in that approach. Following the Theory of Constructed Emotion, however, that still wouldn't be enough. It is still very useful information, but you would need to attach context to it. An example given by Lisa Feldman Barrett in her book "How Emotions are Made: The Secret Life of the Brain" goes something like this: one day she goes on a date with a guy. Afterwards she feels weird in her stomach. Thinking back on her date, she relates this feeling to having "butterflies in her stomach" and thinks she must be interested in him. Later on the feeling gets worse, and it turns out she ate bad food, hence the stomach ache. Her bodily input didn't change; her interpretation did. This shows that the data we get from our own body works more like pattern matching: we have experienced similar bodily feelings before and try to relate them to what is happening to us. The theory says that bodily input isn't an emotion until we have given an interpretation to it. This interpretation, however, is person-bound, so there is no ground truth for emotion. The information can still be used, though, because every time we experience an emotion we probably have similar input from our body to our brain. She calls this interoception. Another line of thinking: say we have a machine that can determine a person's emotion with 99% accuracy. What use is knowing that a person is "sad"? A label is useless from an AI perspective to build upon (beyond more if-then rules). However, if we think in concepts, what we mean when we say that someone looks sad is probably that the person has lost someone or something dear to them, which gives us an incentive to start a conversation and ask "What's wrong?". An emotion word is a useful communicative tool when both people have a similar concept.
A TED talk by her from December 2017 (18 min): https://www.ted.com/talks/lisa_feldman_barrett_you_aren_t_at_the_mercy_of_your_emotions_your_brain_creates_them
P.S. Sorry for using your issue page for a discussion unrelated to OpenFace.
@NumesSanguis Thanks for the inspiring video; I'm sure others will make use of it too.
I think we need to step back on this concept and redefine and reconsider lots of other concepts as well, which needs a lot of time and financial support.
By the way, I will try to take all these worthwhile concepts into account in my thesis.
Hey dears,
Can OpenFace return facial features from real-time video, such as eye corner or lip corner positions?
Thanks,
Elahe
Yes it can, but you will need to tap into the C++ code directly to access that.
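To give a feel for what "tapping into the C++ code" means here: once LandmarkDetector has tracked a frame (as FaceLandmarkVid and FeatureExtraction do), the 68 2D landmark coordinates sit on the face model, and the eye and lip corners are fixed indices in the standard 68-point annotation scheme. Exact field names and types differ between OpenFace versions, so treat the helper below as a sketch rather than the actual API; it only assumes a 2n x 1 matrix laid out as all x-coordinates followed by all y-coordinates, which as far as I know is how OpenFace stores its detected landmarks.

```cpp
// Sketch: extracting eye/lip corner positions from a tracked landmark matrix.
// 'landmarks' is assumed to be the model's detected landmarks: a (2*68) x 1
// matrix with all x coordinates first, then all y coordinates.
#include <opencv2/core.hpp>
#include <cstdio>

void print_corners(const cv::Mat_<float>& landmarks)
{
    const int n = landmarks.rows / 2;  // number of landmarks (68)

    // Standard 68-point indices: 36/39 and 42/45 are the eye corners,
    // 48 and 54 are the mouth corners.
    const int corners[] = {36, 39, 42, 45, 48, 54};

    for (int idx : corners) {
        float x = landmarks(idx, 0);
        float y = landmarks(idx + n, 0);
        std::printf("landmark %d: (%.1f, %.1f)\n", idx, x, y);
    }
}
```

The natural place to hook such code in is wherever the demo executables draw or write out the landmarks after each tracked frame.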
@TadasBaltrusaitis Dear, I'm really confused! Would you please help me use OpenFace? The best setup for me would be to capture my own webcam video from MATLAB and then analyse it ONLINE, in MATLAB.
The Matlab version is not integrated with a webcam; it is used more for prototyping. It would also be too slow for online analysis on a webcam. For any real-time application, the C++ version is much more suitable.
@TadasBaltrusaitis Yes, I see. I'm working with C++ now; so far so good. Thanks for your support.
@TadasBaltrusaitis Dear Tadas,
[code snippet not captured]
Then I tried to save the value of au_class.second in a separate array (AUTracker) for my own purposes, so I changed the code as follows:
[code snippet not captured]
But the problem is that the values in AUTracker are NOT the same as the values in the CSV file! Regards,
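Since the original snippet did not survive the copy-paste, here is roughly the shape of what was being attempted: buffering the per-frame (AU name, value) pairs that the face analyser reports into a separate structure. The helper below only assumes the pairs have the `std::pair<std::string, double>` form that `au_class` suggests; it is a sketch, not a drop-in patch for FeatureExtraction.

```cpp
// Buffer per-frame AU values into our own structure, keyed by AU name.
// 'frame_aus' stands for whatever the face analyser returns for the current
// frame, e.g. a vector of ("AU04", value) pairs.
#include <map>
#include <string>
#include <utility>
#include <vector>

using AUFrame = std::vector<std::pair<std::string, double>>;
using AUTracker = std::map<std::string, std::vector<double>>;

void track_frame(AUTracker& tracker, const AUFrame& frame_aus)
{
    for (const auto& au : frame_aus)   // au.first = AU name, au.second = value
        tracker[au.first].push_back(au.second);
}
```

As the replies below explain, values collected live like this will still not match the final .csv, because OpenFace post-processes the AU predictions once the whole video has been seen.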
Hi, I have no idea how I can fix it. Thanks,
You can't just use ...
If I understand correctly, OpenFace does some post-processing after analysing the whole video, hence the accuracy is lower when getting data in real time?
@TadasBaltrusaitis If I print au_class.second it shows one value, but if I save it in AUTracker (immediately after the print command) it shows another value...
@NumesSanguis Yes, but I'm trying to fetch the AU values exactly at the moment OpenFace writes them to the CSV file, i.e. after all the analysis...
There is actually a secondary post-processing step which runs after all of the data has been written to the file. The .csv file gets overwritten after the video is processed.
@TadasBaltrusaitis So the only way to get the data is to read the .csv file!
You can get it live as well, but the AU prediction will not be as accurate because of the post-processing. Predictions of all the other features should be identical, though.
Yes, but I think the AU features are the most important ones (at least in my project), so it is probably better to get their exact values. In my case, about 10% of the predictions (maybe even fewer) were different, but the effect on my decision-making module is remarkably high.
Hi,
Emotion detection is one of the tasks in my project that I decided to do with OpenFace (thanks for saving me lots of time).
In the second step I need the outputs of OpenFace, so how can I call (or integrate) OpenFace in my own project? I'm going to use Docker to implement the different modules.
Other info: I'm using libfreenect2 on Ubuntu 14.04 with C++.
Thanks in advance