
Social Intelligence Stories that Caught Our Attention: Volume Eight

Here are the stories that caught our attention across the web in the week of 3rd June. This week we have questions on the legitimate access and mining of social data, permission to analyse new data sources, Russian trolls, and trust in algorithms.


Should Researchers Be Allowed to Use YouTube Videos and Tweets?

Is it possible for a computer to guess what a person looks like from their voice? It turns out that it can, with some degree of accuracy.

Researchers from MIT set out to investigate what a computer can guess about a person’s appearance from their voice. To do this, they trained their model on a dataset called AVSpeech, a selection of YouTube videos originally compiled by Google researchers for a different project. We all know that public social data can be legitimately analysed, but one of the (unknowing) study participants was able to identify himself in the research. This has led to a lot of conversations about whether researchers should be able to use social data in their research – what do you think?


Amazon’s Helping Police Build a Surveillance Network with Ring Doorbells

It’s not just social data that holds opportunities for researchers to better understand human behaviour; all connected devices send and receive a myriad of information. That data has huge untapped potential, but accessing and processing it requires consent.

However, a recent article at CNET exposes law enforcement using footage from Amazon Ring devices whose owners might not know their data is being used, or that they can say no to it being taken. We can’t help but think that as long as all this data is being recorded, individuals and organisations will be looking to process it, with or without consent – there is too much knowledge, and therefore power, hidden in it. The data simply offers too much potential not to be accessed, and we suspect law enforcement, policymakers and governments will continue to be drawn to it.


Russian Trolls Experimented with Different Methods to Maximise Political Disruption

Across at Engineering and Technology, they report on a new study that describes in detail how employees of Russia’s Internet Research Agency (IRA) experimented with different methods in the run-up to the election of President Donald Trump in 2016.

The analysis shows that the IRA trolls were able to use “innocuous hashtags” to inject themselves into broader Twitter conversations, with tactics and methods changing over time.  Read the full article here.


Can Algorithms Help Us Decide Who to Trust?

Those of us who work in social intelligence are no strangers to Artificial Intelligence (AI) and its promise to help us do better analysis. But we also know that the insights from our current AI are observational. In other markets, AI and algorithms are being used in organisations to manage business processes, hire employees, and automate routine organisational decision-making, report Cremer et al. across at Harvard Business Review.

This is an interesting article that looks at human trust in autonomous algorithms. The authors also ask whether algorithms can help to compile trustworthiness profiles of individuals and organisations when AI doesn’t possess “social” skills. They argue that this is an important question to ask because:

“Trust requires socially sensitive skills that are perceived to be uniquely human.”

This question is akin to some of the arguments we hear when discussing AI’s ability to understand human conversation and behaviour from unstructured social conversations.


Get our weekly social intelligence update straight to your inbox every Sunday.

The SI Lab Editor

Content shared by the editorial team at The Social Intelligence Lab. Submit your stories at writefor@thesilab.com
