Wondering how we measured "belief-speaking" and "fact-speaking"? We used a neat approach called "distributed dictionary representation" (see https://t.co/PGEEZ9wdl1) that allows us to measure the similarity of a text to the two honesty components. Here are
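The DDR idea described above can be sketched in a few lines: represent a dictionary (a handful of seed words) and a document each as the mean of their word vectors, then score the document by its cosine similarity to the dictionary vector. The tiny hand-made 3-d embeddings below are assumptions for illustration only; a real analysis would load pretrained vectors (e.g. word2vec or GloVe).

```python
import numpy as np

# Minimal sketch of Distributed Dictionary Representation (DDR),
# after Garten et al. (2018). The embedding values are made up.
embeddings = {
    "honest":  np.array([0.90, 0.10, 0.00]),
    "true":    np.array([0.80, 0.20, 0.10]),
    "sincere": np.array([0.85, 0.15, 0.05]),
    "deceive": np.array([0.10, 0.90, 0.20]),
    "mislead": np.array([0.15, 0.85, 0.25]),
    "we":      np.array([0.30, 0.30, 0.90]),
}

def ddr_vector(words, embeddings):
    """Mean of the available word vectors: the DDR representation."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dictionary vector for a (toy) honesty component
truth_dict = ddr_vector(["honest", "true", "sincere"], embeddings)

# Score a tweet: higher cosine = semantically closer to the dictionary
tweet = ddr_vector("we sincere honest".split(), embeddings)
score = cosine(tweet, truth_dict)
```

The same scoring loop runs over every tweet in a corpus; because it uses continuous similarity rather than exact word matches, a tweet can score high on a component without containing any dictionary word verbatim.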
Implementing the 'Distributed Dictionary Representation' method introduced by Garten et al. (2018) https://t.co/RV4k9Ei4Qx
Instead, we use a technique based on a natural language model that encodes the semantic similarity of words (i.e., meaning). So we can measure how similar the words in a tweet are to moral virtue/vice words. https://t.co/cI9xaz4ykf
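Scoring a tweet against two opposing poles (moral virtue vs. vice) works the same way: compare the tweet's embedding to a centroid for each pole and take the difference. The 2-d vectors here are made up for the example; real analyses use pretrained embeddings over curated virtue/vice word lists.

```python
import numpy as np

# Toy two-pole scoring: is a tweet semantically closer to
# virtue words or to vice words? All vectors are illustrative.
emb = {
    "kind":   np.array([1.0, 0.1]),
    "fair":   np.array([0.9, 0.2]),
    "cruel":  np.array([0.1, 1.0]),
    "unjust": np.array([0.2, 0.9]),
    "people": np.array([0.5, 0.5]),
    "helped": np.array([0.8, 0.3]),
}

def centroid(words):
    return np.mean([emb[w] for w in words], axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

virtue = centroid(["kind", "fair"])
vice = centroid(["cruel", "unjust"])
tweet = centroid(["people", "helped"])

# Positive = leans toward virtue, negative = leans toward vice
moral_lean = cos(tweet, virtue) - cos(tweet, vice)
```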
@anidr0id No! But it appears that an increasing number of studies are relying on these seed word/embedding approaches to create dictionaries. I am still a bit doubtful of their validity and transparency. Check: https://t.co/TMsUQ6lsQj
@Abebab @richardveryard @MortezDehghani Here's the method article: https://t.co/5eXCP9WWlI Also, see here how this method can be applied to generate new research questions that can be followed up in the lab: https://t.co/FliXrNYLU3
@Dellea Sure, that's done using word2vec. Here's an explanation of that: https://t.co/p2SQHAMhVI Here's a paper that uses it to do content analysis of social media: https://t.co/5eXCP9WWlI
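For readers who want to see what word2vec is doing under the hood, here is a minimal skip-gram training loop in plain numpy: each center word learns to predict its context words, so words that appear in similar contexts end up with similar vectors. The corpus and hyperparameters are toy assumptions; production work uses a library such as gensim on a large corpus.

```python
import numpy as np

# Tiny skip-gram word2vec sketch (full softmax, no negative sampling).
# Corpus, dimensions, and learning rate are illustrative only.
corpus = [
    "honest true sincere".split(),
    "honest sincere truthful".split(),
    "lie deceive mislead".split(),
    "deceive lie dishonest".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.1
for epoch in range(200):
    for sent in corpus:
        for i, w in enumerate(sent):
            for j, c in enumerate(sent):
                if i == j:
                    continue
                # Predict context word c from center word w
                h = W_in[idx[w]]
                p = softmax(W_out @ h)
                p[idx[c]] -= 1.0               # gradient of cross-entropy loss
                W_in[idx[w]] -= lr * (W_out.T @ p)
                W_out -= lr * np.outer(p, h)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words sharing contexts should now be more similar than unrelated words
sim_related = cos(W_in[idx["honest"]], W_in[idx["sincere"]])
sim_unrelated = cos(W_in[idx["honest"]], W_in[idx["deceive"]])
```

These learned vectors are exactly what DDR-style content analysis consumes: once words live in a shared vector space, dictionary and document similarities reduce to cosine comparisons.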
@JCSkewesDK Similar to what @RockBerta is proposing, you can also use DDR, developed by @MortezDehghani and colleagues, to build your own dictionary of masculinity/femininity: https://t.co/k9lwN5oxGo The code for that is available on github: https://t.co/5U
Our paper on measuring latent semantic content in natural language is out: https://t.co/GY5c0mYwZg
Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis (Behavior Research Methods) https://t.co/2xtZfeRNFt