Friday, November 20, 2015

Point of View

What we have done
Since last week a lot has happened within our group. The week started with us continuing to work on our new idea, i.e. semi-automated animated movies for children, with the purpose of introducing the concept of PoV at an early age. We had a short meeting with Björn Thuresson about the idea. Björn told us that on a technical level our idea is fully possible. We were right about using voice synthesis, and animated movies are vector-based data, which makes it possible to animate new content very easily. However, he told us that the Audio and the Interactive Storytelling groups are working on very similar ideas, i.e. choosing a storyline before you start watching. Björn also raised the issue that our idea isn't a new concept: already in the 1950s there were movies that told the same story from different PoVs. But since it was the 50s, they had to produce x different movies, which must have been fairly costly.

We also talked about more recent movies that explore the same concept as us, e.g. Crash and Babel, and even TV shows where you get to follow the same story from different perspectives (e.g. Game of Thrones). This led to a discussion about how to take our concept further. What is really storytelling, and what is a story? What are the components? A story, today, is more or less linear: it has a beginning and an ending. What if we could take our idea with x different PoVs and let you choose your own storyline? Imagine a sphere where you can choose different paths between different points (the different PoVs). In the end it was a very interesting discussion, but it felt more like an idea that would have been great if we were exploring the pure concept of storytelling, and not storytelling-PoV. This week we have also had a meeting with Malin, discussing both our new idea and our old one.
She seemed to like our new idea, but she liked the old one even more, especially the very first one, when we were thinking about using the Google cube for news reporting.

Challenges we have encountered
As mentioned above, we had an issue with two other groups possibly working on more or less the same concept as we are. So we contacted the groups, and it turned out they are not working on the same concept. The Interactive Storytelling group, however, had a very similar idea to our first one (PoV in text, presented more or less automatically to the reader). We also have a smaller issue within the group: half of the group really likes the animated-movie idea, while the other half is not as convinced. Hence, we have an ongoing discussion about whether we should go back to our initial idea with text-based PoV. However, we struggled to come up with a solution that is "future" as well as fills a good function within the PoV-and-text area. But yesterday (Thursday), we had a breakthrough.

Changes we have done
Since half of the group didn't much like the animated-movie idea, we had yet another brainstorming meeting. At this meeting we went back to what we talked about at the beginning of the course, e.g. the Google cube. We decided that it's important that we focus on people who actively want to be more aware of their filter bubble and PoV. We think that in the future more people will be connected, and many Internet applications will be based on your search history, presenting information they think you would like. Hence, the filter-bubble effect will be even larger than it is today. We also decided that video, not text, is the future, and that we want to visualize this complex issue for those people, focusing on some sort of video.

Based on this we came up with an idea: a plug-in for video. So when you, in the future, watch a news story on e.g. YouTube, our plug-in will be displayed in the corner. The plug-in presents you with a visual representation of the PoV of that video. Based on the video, an x- and y-axis are generated, and your video is plotted together with other similar videos on the Internet. If you want to explore the same story from a different PoV, you just click on the map. Hence, the user both becomes more aware of their own PoV and gets an easy way to challenge and broaden it. Our plug-in analyzes videos on the Internet, creates a coordinate system based on the videos' content, and plots both that video and other videos (see image below). So, in short:

  • Our plug-in analyzes different videos on the Internet. Our focus is on news, but it is able to analyze any other video on the Internet.
    • This is done with NLP, audio-to-text transcription, text analysis and categorization (with big data and machine learning).
  • You as a user watch a video about a news event in Stockholm.
  • Our plug-in presents you with a map with 2-3 different axes based on that video's content.
    • The "matching" of different videos' coordinate systems is based on a mathematical formula.
  • You as a user see that you have only seen this particular news story told from a Western right-wing PoV. Since you want to broaden your perspective of the world, you click on another video on the other side of the map.

[Image: sketch of the plug-in's PoV map]
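To make the steps above a bit more concrete, here is a minimal sketch of how a transcript could be placed on a 2-D PoV map. The axes and the marker-word lists are invented purely for illustration; a real system would learn the axes and categories with machine learning, as described above.

```python
# Hypothetical sketch: place a video transcript on a 2-D PoV map by counting
# marker words for each end of each axis. Axis names and word lists are
# made-up examples, not the real categorization.

AXES = {
    "x": {  # economic left (-1) vs. right (+1)
        "neg": {"welfare", "union", "equality", "public"},
        "pos": {"market", "tax", "private", "deregulation"},
    },
    "y": {  # local (-1) vs. global (+1) framing
        "neg": {"neighbourhood", "local", "city", "municipal"},
        "pos": {"international", "global", "eu", "world"},
    },
}

def score_axis(words, axis):
    """Score one axis in [-1, 1] from marker-word counts."""
    neg = sum(w in axis["neg"] for w in words)
    pos = sum(w in axis["pos"] for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def plot_coordinates(transcript):
    """Map a transcript (already speech-to-text'd) to (x, y) on the map."""
    words = transcript.lower().split()
    return (score_axis(words, AXES["x"]), score_axis(words, AXES["y"]))

# Two toy "transcripts" end up on opposite sides of the x-axis:
print(plot_coordinates("the market should decide lower tax more private choice"))
print(plot_coordinates("public welfare and equality for every union worker"))
```

Clicking a point on the map would then simply open the video plotted at those coordinates.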

What we will do next
The next step is to make our idea waterproof when it comes to the technical aspects. We need to do more research on how computers understand language; we need to find sources showing that audio can be translated to text, and that it is possible to analyze the person speaking and that person's emotions. We also need to do research on big data, machine learning and categorization. Another thought is to come up with a mathematical formula that enables us to put videos with different x- and y-axes into the same coordinate system.
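One candidate for the "mathematical formula" mentioned above is plain min-max normalization: whatever range a given axis produces raw scores in, linearly rescale them into a shared [-1, 1] interval so videos scored on different axes become comparable. This is only an assumed starting point, not the final formula.

```python
# Assumed sketch of the normalization step: map a raw axis score from its
# own range [axis_min, axis_max] into the shared coordinate system [-1, 1].

def to_shared_axis(raw, axis_min, axis_max):
    """Linearly map raw in [axis_min, axis_max] to [-1, 1]."""
    if axis_max == axis_min:
        return 0.0  # degenerate axis: everything lands at the center
    return 2.0 * (raw - axis_min) / (axis_max - axis_min) - 1.0

# Videos scored on axes with different ranges end up comparable:
print(to_shared_axis(7.5, 0, 10))   # 7.5 on a 0..10 axis  -> 0.5
print(to_shared_axis(-2, -4, 4))    # -2  on a -4..4 axis -> -0.5
```

With every axis mapped into [-1, 1], videos from different analyses can be plotted on the same map.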

1 comment:

  1. Wow! What an interesting turn of events. Now we are back to the video, news and a really cool plug-in. Yes, I like it! Now, you have to be very well-organised in order to meet the deadlines ahead. But you can do it, I'm sure! :-)
