Sunday, November 29, 2015

Human Senses

What we have done.

(1) The technology behind our concept
We managed to get our concept really clear and changed its name from Sensory to Scent-O, since we want to focus more on aromas. The next step was to discuss in more depth how Scent-O would work:

How should we allow our users to add scents? How would they be able to adapt and create them? How would they be played? Would they "capture" the scents like you capture an image with a camera, or would they choose a type of smell (like rose) that tells the device to play a certain chemical combination of this aroma?

All these questions required more research about the oPhone technology. Finally we agreed that Scent-O should include:
  • Obrary, a library for digital smells (kind of like iTunes for mp3)
  • Scent-O Bulb, an editing tool for the smells where users can adjust the intensity of the aromas, combine different aromas and set how long a certain aroma should be played (see the sketch below).
  • Scent-O would allow anyone to publish their aroma-enhanced stories on the Scent-O platform. Free accounts would have to publish publicly, while paid accounts could keep publications private, password-protect them or embed them in external websites.
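
To make the Scent-O Bulb idea a bit more concrete, here is a minimal sketch of how an edited aroma track could be represented as data. This is purely illustrative: the class names, fields and values are our own assumptions, not a finished design.

    # Minimal sketch (Python) of an aroma track edited in the Scent-O Bulb.
    # All names, fields and values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AromaCue:
        scent_id: str      # identifier of a scent in the Obrary, e.g. "rose"
        intensity: float   # 0.0 (off) to 1.0 (full strength)
        start_s: float     # when the aroma starts playing, in seconds
        duration_s: float  # how long it should be played

    @dataclass
    class ScentTrack:
        title: str
        cues: list = field(default_factory=list)

        def add_cue(self, cue: AromaCue) -> None:
            self.cues.append(cue)

    # Example: a story scene that blends two aromas with different intensities.
    scene = ScentTrack(title="Morning in the garden")
    scene.add_cue(AromaCue("rose", intensity=0.7, start_s=0, duration_s=20))
    scene.add_cue(AromaCue("wet_soil", intensity=0.4, start_s=5, duration_s=15))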


A big question was also whether we should include pictures, videos and/or VR. On one hand we wanted to really focus on smell; on the other hand, visuals are becoming more and more important, and aromas themselves can obviously only tell a story if combined well with words and pictures. Finally we agreed on including pictures, videos and 360° videos, but not VR.

(2) The article
We outlined our article, including a future scenario to show its applications - but also its drawbacks! We do want to make clear that Scent-O would not solve every problem and automatically produce better stories; it will be a tool. It will still depend on the ability of the journalist to use this tool efficiently and create compelling stories.


What we are doing.

We are now finishing the article and producing some visuals: a logo and possibly a mock-up of the platform.


What we are going to do.

We will finish the article and then start on the final presentation.

Friday, November 27, 2015

Big Data



What we have done.
During the last week we met with Malin to present our final concept. We got very good feedback, and from that we refined our concept into a final product. Later in the week we worked with this product, creating mockups, flowcharts, scenarios, personas etc. - a lot of material that will help us present our idea.


Our current idea
Introducing Data Tale, named Dale, a storytelling application that creates stories by utilizing big data. Dale analyses your preferences (genre, fictional level, favourite author, length of story etc.) and defines keywords and important parameters in order to collect relevant information from various big data sources. Dale then creates the story you want, at the moment you want it.
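
To make the description above a little more tangible, here is a very rough sketch of Dale's pipeline in Python. The preference fields, the keyword extraction and the data sources are all assumptions made for illustration, not an actual implementation.

    # Illustrative sketch of Dale: preferences -> keywords/parameters ->
    # collect material from (hypothetical) big data sources -> assemble a story.

    def extract_parameters(preferences):
        """Turn user preferences into keywords and constraints."""
        return {
            "keywords": [preferences["genre"], preferences["favourite_author"]],
            "fiction_level": preferences["fiction_level"],  # 0 = factual, 1 = fully fictional
            "max_length_words": preferences["length_words"],
        }

    def collect_material(params, sources):
        """Query each (hypothetical) data source for items matching the keywords."""
        material = []
        for source in sources:              # each source is assumed to offer .search()
            material.extend(source.search(params["keywords"]))
        return material

    def assemble_story(material, params):
        """Naive assembly: concatenate snippets until the length limit is reached."""
        story, length = [], 0
        for snippet in material:
            words = len(snippet.split())
            if length + words > params["max_length_words"]:
                break
            story.append(snippet)
            length += words
        return " ".join(story)

    def dale(preferences, sources):
        params = extract_parameters(preferences)
        return assemble_story(collect_material(params, sources), params)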


The next step
Our next step is to finish our article, which we are working on during the weekend.


Challenges encountered
As with any product creation, the main challenge we have encountered is defining all the small details that Dale depends upon.


Changes in the project
The change in the project is that we changed our idea (again), but this time we have confirmed it as our final idea.


Interactive storytelling group

What we have done
This week we have focused on writing the book chapter, making a manuscript for the presentation movie, coming up with a logo and writing the text for the web. We have also come up with a name for our concept!

We have discussed different business models and how to apply them to our idea. We still have to decide whether we have a business-to-business concept or a business-to-consumer one. Further, we have done a SWOT analysis of the strengths, weaknesses, opportunities and threats of our concept. We have also divided the work for the book chapter within the group, and we are aiming to finish it this Sunday.

Challenges encountered
The biggest challenge we have encountered this week has been getting all group members together. Since everyone has a lot to do, it has been difficult to get together and discuss/develop our idea.

We have been doing a lot of research to build a solid foundation for our idea. We are still not sure about the purpose, which is a bit concerning; the main purpose of the idea is still shifting. Either it will end up being about encouraging sustainable choices when eating food, or about strengthening the food experience from a cultural perspective. The good research we have done so far will definitely make the decision about the purpose much easier. We have been working on the report this whole week, but we still have a lot to do.

What we will do
First of all we will determine the concept for our idea and make it clearer than it is right now. We are on the right track but still have some discussion to do in the group.
Then we will complete the report and also make all the graphics for the book ready for the second of December. We also need to celebrate our completed report with a beer and at the same time finish the script for our movie.

Resources

  1. Palmer, S. E. & Schloss, K. B. (2010). Human preference for individual colors. http://www.researchgate.net/publication/221329059_Human_Preference_for_individual_colors
  2. Hurlbert, A. C. & Ling, Y. L. (2007). Biological components of sex differences in color preference. Current Biology.
  3. Ou, L.-C., Luo, M. R., Woodcock, A. & Wright, A. (2004). A study of colour emotion and colour preference.
  4. Palmer, S. E. & Schloss, K. B. (2009). An ecological valence theory of human color preference. http://www.pnas.org/content/107/19/8877.full#ref-4
  5. Humphrey, N. (1976). The colour currency of nature.


Eyewitness Storytelling

Hiya!

This week has mostly (as with all the other groups) gone towards writing the text, along with some graphical work.

Since we've been writing in depth about our service "Lit", we've finally been able to firmly decide which features it will have and which ones it will not. We've also started working on our graphical profile (as can be seen in our example - the logo, below).

As of yet, we have no questions or obstacles to overcome; everything is on track. If anything comes up we will definitely not hesitate to contact you or Daniel. Now, back to writing the text and continuing our work up until the hand-in on Tuesday!

For next week we're looking to start working on a video for the final presentation, and we feel that (based on the graphical work we've done this week) we have a bit of a head start when it comes to the color palette and graphical profile.

Thank you.


Future of Advertising

What we have done


During this week we have continued to focus on structuring, writing and dividing the book chapter between the members of our group. Starting to write the chapter has been great, as we have had to reflect over and concretize every part of our project in order for both the concept and the text to be coherent. We have also tried to establish what kind of vantage point our group should have when approaching our idea, which we had not reflected much upon prior to writing the book chapter. Our idea is that we are a company/advertising network that has identified a valuable solution for making storytelling in advertising more effective, based on technology that underpins future society as well as advertising trends in the coming years (our scenario). We believe that this vantage point will create better opportunities for us to present our project in a more interesting and effective way.


What we will do


We have a lot of writing to do before the deadline on Monday, so we will continue to focus on that. The idea is that once the book deadline has passed, we start to focus on our design representation/presentation in order to be done with our project well in time before the final presentation deadline. We will also come up with a logo and company name before the book deadline.

Challenges encountered


Mainly we have been struggling with how we look upon ourselves in our scenario and concept. In order to make the concept more interesting and better conceptualized from a book/presentation point of view, we believe it is important that we have a specific vantage point, which we have worked on. This has been the major challenge this week.

Changes in the project

Other than that we now identify ourselves as a company/advertising network provider that understands it can create an advertising solution that empowers storytelling in advertising through emotional data (our already defined concept), we have not made any changes to the project.

Cross-cultural storytelling

  • What we have done
    Similar to last week, we spent a whole day talking about the business model in detail, the user interface of the platform and how to write our book chapter. This week proceeded much better than last week, because we could express our thoughts and ideas more clearly. We started writing the book chapter and did some proofreading to help each other out. We have also been much better at communicating through our online channels and have helped each other when writing different parts of the text, in order to maintain a continuous dialogue and keep our thoughts aligned. Furthermore, we created three mockups of our interface, which will be shown in the book and also in the movie. We have also decided on a definite name for our concept/service, which is Wiews.
  • What we will do:
    The next step for us will be to finalize the text of the book chapter. We have an internal deadline this weekend, and after that we will start to proofread and check that everything is coherent. After that we will fully concentrate on the presentation and the movie we are planning to create to visualize our concept.
  • Challenges encountered:
    As you can read in the last blogpost, we have suffered from a lack of communication and difficulties in understanding each other and agreeing on different parts. As mentioned above, we have expressed our thoughts better and it has been easier for us to understand each other and work together. The challenge beyond that is now to get everyone else to actually understand our concept and how we will make the world better in the future by writing these 16000 characters, and that is easier said than done.
  • Changes in the project:
    We haven’t changed direction remarkably, just made the concept crisper and narrowed it down.
  • Resources: The same as last week.

Personalized Storytelling - Nuse


This previous week our group has been focusing on the details of our solution, Nuse, and we have been trying to narrow it down to make all of our ideas more concrete and clear. There have been a lot of discussions about how we want to show our scenarios and how the software we are creating will work from a technical standpoint. We are now getting closer to completing our goals and we work very well together as a group.

The next step is to create the scenarios in our animated video. We will change some parts that we showed at the mid-crit in order to improve it, and we will create a better sound file and add subtitles. We will also finish the book chapter that we are working on right now.

The challenge has been to narrow things down and focus on the most important parts of our solution. Trying to envision the future while also thinking about how the technology will work in practice has been the hardest part. The only changes we have made are that we are adjusting the scenarios a bit, in order to show our software in a better way, and we are putting more effort into things like “fear of missing out” and the “filter bubble”.

As written in our last update, we had an inspiring meeting with Omni, but we have also researched things such as “smart homes”, “smart cities” and the Internet of Things in order to grasp the future of connectivity, which will be important for our solution.


Packaged Stories (Moving Images)

What we have done

This week has been all about shooting the movie for our final presentation. As previously mentioned, Canon have graciously been kind enough to lend us a professional video camera for the week, and thus we have chosen to dedicate as much time as possible to getting the most out of it.

Monday and Tuesday were spent picking up and preparing the equipment we have been using and also, to some extent, going over the pre-production material. Wednesday was our first actual day of shooting, starting off early in the morning and working from sunrise to sunset in the late afternoon. Although we did not manage to get all the scenes we wanted, we are quite happy with the overall result.

Amazing view from one of our filming spots


Olof rigging up some camera equipment


Thursday was spent in the Green Screen Studio (henceforth: the studio) of KTH Media Production, an administrative unit working with moving images to assist the school’s academics. Our hope is that this particular feature, allowing for a lot of interaction within the actual movie, will give our presentation some extra punch. Finally, Friday is being spent, as we are writing this, finishing the last green-screen scenes.

Olof demonstrating how to act

Lucas rocking that microphone

As you might have guessed, our plan is to try and make the movie speak for most of our presentation, taking advantage of the visual qualities that video provides to really show the audience how Packaged Stories is supposed to work.

What we will do
We will continue to shoot during the weekend, aiming to be done by noon on Sunday. Hopefully we can also start the editing by the end of that same day. We plan on having loads of digital VFX (infographics, environments, dynamics and similar, not to mention all the green-screen-related work) to further enhance our movie.

We also plan on finishing our text for the book this weekend. As of right now we have a draft ready, but to reach the final result we need to add more text. In addition, we are going to create and/or decide upon what graphics we want and which quotes to emphasize in order for our chapter to be completely done.

We still haven’t got any answers from the scientists and industry R&D personnel we contacted, which is a bit worrying. However, we plan on making a last attempt at calling them so that we can hopefully get some valuable input for our report before the deadline on Tuesday.

Challenges Encountered
As always, the greatest challenge is finding enough time to shoot everything we need. Although there are only four of us, finding big enough matching gaps in our individual schedules is hard. It is not impossible, however (we seem to have managed fine this week), but the result is usually that something we want to shoot has to be cut.

Another challenge, albeit of a more technical nature, has been to crank the most out of the camera we borrowed. We have jury-rigged an elaborate mixture of low-end budget equipment from the camera to a laptop and finally to an external hard drive. This particular solution was ultimately successful, although it took quite a lot of time to figure out. The result is that we have been able to record a stream of 10-bit RAW CinemaDNG (CDNG) at 4K.

In other words, kick-ass quality!      

Future of Audio

This week has been much better. On Monday we met Tomas Granryd from Sveriges Radio, who is responsible for strategic development at the company. This meeting was crucial for the project: even though Tomas did not like our idea, we firmly decided to stick with it, and based on his comments we substantially improved it. Now we know who is going to use the service, what problem it solves and what technology stands behind it. The Ukrainian member of our group posted the idea on her Facebook page and received many positive comments like "where can I download this application?". Of course, this is not real prototyping, and of course people on Facebook may be biased. But anyway!!:)

Then we developed a name for our project and a logotype (see above). We don't have designers in our group (nor people who understand audio technology:)), but creating the logo probably took only a couple of hours: it is easy when you know how to google the necessary free services. And yes, no plagiarism, of course.

Today, Friday morning, our text for the book is almost ready. Only some editing and revision remain. With pictures, and 15900 characters of text.

So, we started the project very badly, but it seems we've managed to catch up with the other groups and even outstrip some of them. Looking forward to making the audio, video and presentation! It's gonna be fun:)

Point of View

What we have done
Last week we finally decided on an idea. Our new idea, and future scenario, can be summarized as follows: In a world where the information flow is increasing at a rapid pace and personalized content is quickly becoming top priority, the need for awareness and transparency is more important than ever. Sentiment is a service for exploring stories presented in video format from different points of view. It enables the user to instantly recognize the position of any given video in a “point of view landscape”. This position can be used as a starting point to further explore video content of different points of view, providing the user with an intuitive tool for experiencing the same story from several perspectives.

Sentiment logo
This week we have continued to study and explore what technologies are needed in the future to make our idea possible. Since we changed the concept completely last week, we had to make up for it this week. Therefore we have worked extra hard on exploring the technologies that exist today and what you can actually do with e.g. NLP, sentiment analysis and text analysis. We have also started to work on our text for the book chapter. We did a lot of research in the beginning of this project on what the concept point of view is and its different definitions. Because of that, our background on the subject and the future scenario is more or less finished, which is nice now that we are a bit behind on the technology. So our main focus this week has been both to explore the technologies and to rewrite our old text to work as an introduction to our chapter of the book. We have also decided on a company name, Sentiment. Sentiment analysis refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information in source materials. Hence, it makes a perfect name for our future company. This week our company’s logotype has been created as well.
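
To give a flavour of how a position in the “point of view landscape” could be computed, here is a toy sketch in Python of lexicon-based sentiment scoring applied to video transcripts. The lexicon and the example clips are invented for illustration; a real system would rely on proper NLP and sentiment-analysis tooling rather than this simple word list.

    # Toy lexicon-based sentiment scorer for video transcripts. The lexicon is
    # invented for illustration; real NLP tools would be used in practice.
    LEXICON = {"good": 1, "great": 2, "progress": 1,
               "bad": -1, "crisis": -2, "failure": -2}

    def sentiment_score(transcript: str) -> float:
        """Average polarity of the known words, in the range [-2, 2]."""
        words = [w.strip(".,!?") for w in transcript.lower().split()]
        hits = [LEXICON[w] for w in words if w in LEXICON]
        return sum(hits) / len(hits) if hits else 0.0

    # Each video gets a position in the "landscape"; here only one axis
    # (negative to positive), but more axes (topic, subjectivity) could be added.
    videos = {
        "clip_a": "Great progress in the negotiations.",
        "clip_b": "The talks ended in failure and crisis.",
    }
    landscape = {vid: sentiment_score(text) for vid, text in videos.items()}
    print(landscape)  # {'clip_a': 1.5, 'clip_b': -2.0}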

What we will do
We will continue to work on the text for the book as well as getting started on creating a prototype of our product. We will also start working on a storyline for the movie we are making for the final presentation.

Challenges we have encountered
So far so good at the moment.

Changes in the project
We are proud to say that there has been no change this week.

Resources
We feel that our project idea is great, but in order to demonstrate its potential and make it credible we need to use examples of (news) videos that are good. Hence, we decided to contact Anna at SvD, who we have been in contact with after our mid-crit session, to ask for some advice on news stories that we could use in our final presentation video to demonstrate our concept in a clear way.

Other
-

Thursday, November 26, 2015

Virtual reality w.48

This week has been a week of work, and not as much discussion as the previous weeks. We have mainly been working on the book chapter that we will submit on Tuesday next week. Last week we decided on all the headlines and what we should write under each of them. We divided some of the writing and created an internal deadline for when each of us should be finished with our respective part. We set that deadline to Friday (tomorrow). The parts that we didn’t divide were written during meetings this week. The plan is to have a finished chapter on Friday, have everyone read it through and note things we should change, and then meet next Monday to finish everything.

We have started to discuss the movie that we will show in our final presentation. We have discussed possible scenarios that we might capture, the technical difficulties involved and how to handle them smoothly.

About the validation that you were not convinced about: yes, it is a two-step validation. It is a validation method that we have copied from YouTube and other social media platforms. In a sense we are a news company, but we are also very similar to YouTube. We can’t have the same validation as regular news companies, since we will have to process a lot of news given that we are crowdsourced. In order to process that much content we had to look at validation models used by services with similar problems, and validation in two steps is probably the easiest solution. The users report content that is not appropriate, and then we, as a second step, validate only the content that has been reported. We will, however, provide some guidelines for how the news should be presented and things that the providers of the news should think about. And regarding false news, the community will report suspicious news and then we validate it. The big benefit of relying on the community is that it saves us a lot of time.
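
As a rough illustration of the two-step flow (the community reports, and only reported items reach us for review), here is a minimal sketch in Python. The report threshold, field names and moderator interface are all assumptions, not a specification of how our service would actually be built.

    # Sketch of two-step validation: users report items, and only reported
    # items enter the review queue. Names and thresholds are hypothetical.
    from collections import defaultdict

    REPORT_THRESHOLD = 3          # reports needed before an item is reviewed

    reports = defaultdict(set)    # item_id -> set of user_ids who reported it
    review_queue = []             # items waiting for the second, human step

    def report(item_id: str, user_id: str) -> None:
        reports[item_id].add(user_id)
        if len(reports[item_id]) >= REPORT_THRESHOLD and item_id not in review_queue:
            review_queue.append(item_id)

    def review_next(is_acceptable) -> None:
        """Second step: a moderator checks the oldest reported item."""
        if review_queue:
            item_id = review_queue.pop(0)
            if not is_acceptable(item_id):
                print("removing", item_id)

    # Example: three users report the same clip, which then reaches the queue.
    for user in ("u1", "u2", "u3"):
        report("clip_42", user)
    review_next(lambda item_id: False)  # moderator judges it inappropriate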

Wednesday, November 25, 2015

Computer Games

What we have done this week:
The biggest thing we have accomplished this week is that we wrote 12000 characters for our book chapter, including background research, future trends, explanation of the concept, story and gameplay. We explain our game design and the goals of the game. We worked with the "real time" concept and decided this was something we should use in our concept. For example, in our concept you have 2.5 hours to catch the evacuation buses; however, the game does not have to end if you miss your bus. Our game uses a network of gameplay with alternate storylines and endings. You are free to go wherever you feel like, and the actions you take will influence the story.
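
As an illustration of how the real-time deadline and the network of alternate storylines could fit together, here is a toy sketch in Python. The scene names, choices and the way the deadline is checked are all made up for this example and are not our actual game design.

    # Toy sketch: a branching story graph plus a real-time deadline. Miss the
    # evacuation buses after 2.5 hours and the story branches instead of ending.
    import time

    BUS_DEADLINE_S = 2.5 * 60 * 60   # 2.5 hours of real time

    STORY = {
        "start":       {"text": "The sirens go off.",
                        "choices": {"head to the buses": "bus_stop",
                                    "search for your sister": "apartment"}},
        "apartment":   {"text": "The flat is empty.",
                        "choices": {"head to the buses": "bus_stop"}},
        "bus_stop":    {"text": "You reach the evacuation point in time.", "choices": {}},
        "left_behind": {"text": "The last bus is gone. The story continues in the empty city.",
                        "choices": {}},
    }

    def resolve(node_key: str, start_time: float) -> str:
        """Redirect any attempt to reach the buses after the deadline has passed."""
        if node_key == "bus_stop" and time.time() - start_time > BUS_DEADLINE_S:
            return "left_behind"
        return node_key

    # Example: the player tries to reach the buses three hours after the start.
    started = time.time() - 3 * 60 * 60
    print(STORY[resolve("bus_stop", started)]["text"])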

Our investigation of the Stalker game is done. We did not find it useful as a source of inspiration, since we found the plot weird, the storytelling kind of weak and the graphics not all that great. However, we saved some screenshots from the game that we might use to illustrate our concept.

We have e-mailed Daniel Ström from Gurugames with some questions, for example about his views on storytelling, game mechanics and future trends in computer games. We'd like to use his answers as quotes in our upcoming book chapter.

We have been thinking about using a Beatboard or a storyboard instead of cartoons or a movie to illustrate our concept. 

We did a quick investigation into whether other games use "real time" as a major element of their game design, but only came across games belonging to the "real-time strategy" genre, which is not what we were looking for.


What we will do next:
We still have writing to do for the book chapter. The text as of now is just a draft, and after discussing it there are several things we feel we have to cut down on or add. We still have not chosen any images for the book; however, we have made a collection to choose from.
 
We are waiting for a response to the mail to Daniel Ström and will see if he has any useful answers. This week we will talk to Björn again to bounce ideas; we found that to be really useful last time.

We also need to make some explanatory graphics to illustrate our concept. 

Our biggest concern is still finding a way of presenting our concept at the final presentation, and that concern is growing, so we have to devote more time to working on this matter. We need to look further into options such as using comics.

We came across a game called ICO, which works without any interface. We found that interesting and we will use it as a source of inspiration.



 

Friday, November 20, 2015

Future of Audio




This is probably the most difficult thing we have ever done in our lives. So challenging:)

We haven't moved far from last week, to be honest. We discussed the idea again and seem to have found the technological grounding for it. We have also talked to Tony Churnside from the BBC, who until the end of this year was responsible for innovations in radio broadcasting at the BBC. He approved our idea, at least theoretically. Whether it would be feasible to realise or not, he cannot say - it would be necessary to make a demo to check, which we probably won't be able to do: too complicated.

The idea of our project is not that far from the initial one. It's based on the BBC's object-based audio technology, where the audio is transferred to the listener not as one whole piece but in small chunks, and can thus be assembled in different ways. Here's the link to this technology: http://www.bbc.co.uk/taster/projects/responsive-radio

If you add voice recognition and artificial intelligence to this technology, the user can ask questions and the audio can be assembled in such a way that it answers them. In order to produce such highly customizable audio, metadata is necessary for the initial set of audio objects. We will explain all of this in less sophisticated terms and language in our report.
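
To show what we mean in code form, here is a very simplified sketch in Python: each audio object carries metadata tags, and the listener's (already transcribed) question is matched against those tags to decide which objects to assemble. The objects, tags and matching rule are all invented for illustration and are far simpler than the real BBC technology.

    # Simplified sketch of object-based audio assembly: the question selects
    # which audio objects to play, based on their metadata tags. All objects,
    # tags and the matching rule are hypothetical.
    AUDIO_OBJECTS = [
        {"file": "intro.wav",      "tags": {"overview"}},
        {"file": "economy.wav",    "tags": {"economy", "background"}},
        {"file": "interview1.wav", "tags": {"economy", "interview"}},
        {"file": "weather.wav",    "tags": {"weather"}},
    ]

    KEYWORD_TO_TAG = {"money": "economy", "economy": "economy",
                      "rain": "weather", "weather": "weather"}

    def assemble(question: str):
        """Pick the audio objects whose tags match keywords in the question."""
        words = (w.strip("?.,!") for w in question.lower().split())
        wanted = {KEYWORD_TO_TAG[w] for w in words if w in KEYWORD_TO_TAG}
        playlist = [obj["file"] for obj in AUDIO_OBJECTS if obj["tags"] & wanted]
        return playlist or ["intro.wav"]   # fall back to the overview

    print(assemble("What about the economy?"))  # ['economy.wav', 'interview1.wav']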

In order to check this idea, we will meet Tomas Granryd from Sveriges Radio on Monday. We really need the project to be approved by someone deep within the industry (as we mentioned, Tony approved it, but we talked through chat and it's better to discuss the project with someone in person).

But we do not have a good plan for the presentation yet, except for the thought of making a ppt presentation that explains how it all works technically, and an advertising cartoon for the end user (without going into any technical details).

This all seems to be so confusing and impossible to fulfill...:) I hope that other groups feel better than us!:)

P.S. Here are also some very important and useful links that will help us write the report, which we will start writing after Monday's meeting with Tomas. We hope.

http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP285.pdf
http://www.mpeghaa.com/papers/Metadata_for_Object-Based_Interactive_Audio_ID36_Fueg_FINAL.pdf

P.P.S. And also, we have no idea yet how to integrate the story into our project, especially the Baltic soldiers story... We would be happy to take another one, but it seems it's too late. One more huge stone of confusion:)

Eyewitness

Hi there, Mr. Blog.

As mentioned last week, we’ve continued working on the concept and gone through it in more detail. We’ve decided what it is we want our concept to do, and begun sketching the graphics and potential logos etc. Furthermore, we’ve continued reading articles and delved deeper into the companies already working with eyewitness content management and distribution of some sort, to get a better idea of what we need to do in order to: 1. fit into our previously thoroughly thought-out future scenario, 2. distinguish ourselves from the services currently available on the market and their future visions for their corporations. We’ve also decided, based on feedback from the mid-crit, to focus more on our concept at hand rather than the history and future of storytelling per se. For instance, we’re looking into how the text which is to be written before the 1st of December should portray our project and concept idea. Thus, we’re aiming for roughly 50% about the history, the current state of eyewitness storytelling and the future scenario, with the remaining 50% about the concept and how it solves future problems etc.

The main challenge faced when determining the specifics of the concept has been narrowing it down. There are just so many potential ideas implementable in a digital space such as this, where eyewitness stories are summarized into article content. Do we want the website to have social media elements? Should users/consumers be able to upload data manually or should everything be left to the gathering algorithm? Should the content gathered be meta-tagged with user specifics connecting it to the user who uploaded it? These are just some of the questions we’ve been working on solving throughout the week.

A large chunk of what has been done this week is related to the writing of the magazine chapter. We received more specific information about what the text should contain and its format, and it proved to be a much greater task than was previously communicated by the project management group. We have thus started outlining what parts we should include in the article as well as distributing the writing across the group members. We decided pretty early on that we want to get started and complete the writing as soon as possible so that 100% focus can be put towards making a great presentation later on.

Furthermore, we would like to, if possible, ask for a feedback meeting with Malin and Daniel next week. As soon as possible, say Monday or Tuesday?

/Eyewitness group

Update from ”Personalized Storytelling”-group

Summary of week 47

General update
Last week, we had a meeting to discuss the feedback we got on the midcrit presentation. In short, we decided that we would stick with the plan of doing a video for the final presentation. However, after receiving the midcrit feedback, we understood that we probably need to tweak the video we have now a little bit. For example, we need to make it more detailed and fun to watch. Also, it might be preferable to rework the scenarios (which the video is based on) a little bit. There might be room for even more powerful scenarios when looking at how our application NUSE can help or be of value to people in their daily life.

Additionally, two members of our group met with one of the founders of Omni, the news platform that uses personalization by providing news within the fields a specific user is interested in. Some points from the meeting with Omni were that personalisation will probably be unavoidable in the future, which is positive for our group since we chose to continue working with personalisation instead of against it. We also discussed how filter bubbles and personalization do not necessarily have to depend on each other: personalization does not necessarily generate filter bubbles, as long as there exists content that challenges the user.
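
To illustrate that last point in code, here is a toy sketch in Python of a feed that is mostly personalized but always mixes in a share of articles outside the user's profile. The data, the feed size and the 20% share are our own assumptions, not Omni's or NUSE's actual approach.

    # Toy sketch: a personalized feed that reserves a share of slots for
    # "challenging" content outside the user's interests, so the filter
    # bubble never fully closes. Ratios and data are hypothetical.
    import random

    def build_feed(articles, interests, size=10, challenge_share=0.2):
        matching = [a for a in articles if a["topic"] in interests]
        challenging = [a for a in articles if a["topic"] not in interests]
        n_challenge = max(1, int(size * challenge_share))
        feed = matching[:size - n_challenge]
        feed += random.sample(challenging, min(n_challenge, len(challenging)))
        random.shuffle(feed)
        return feed

    articles = [{"title": "story %d" % i, "topic": t}
                for i, t in enumerate(["sports", "politics", "tech", "culture"] * 5)]
    print([a["topic"] for a in build_feed(articles, interests={"sports", "tech"})])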

Our current idea
We are actually pretty satisfied with the idea that we pitched at the midcrit presentation. However, there is of course always room for improvement. First of all, we are currently looking into the scenarios that we presented in our video. We are done with our personas, but the scenarios might be able to become even more spot on.

We have also discussed a bit how we want to define storytelling at our final presentation. Back then, for our project we decided to define it as storytelling in news. Now, we believe that we might have to describe it a bit more, although without it taking too much time or room in our final presentation. We are discussing how to explain it in a short and concise way, based on some insights we got from our interview with Omni.

Decisions we have taken
This week, we have decided to work on the video even more. We might need to change the scenarios a bit.

Next step

Next week, we are going to meet again. We will continue to work on our presentation video. We might discuss and map our process for this project a bit. Additionally, we will start writing the report.

Virtual reality

We started the week with a meeting on Tuesday with Malin and Daniel about our new approach, in order to get some feedback. It is always good to get a second view of the concept and to try to highlight the problems that might occur with our new approach. They liked it, but some questions were asked about how the news should be captured. We believe that in the future it will be more common than it is today to capture moving images with a 360-degree view. Today that procedure is kind of expensive, but in the future it might be as simple as a piece of hardware that you plug into your phone. So we don't believe that will be a problem.

We also discussed the mail that we got from Anna Careborg about creating a virtual reality reportage for SvD. It might be hard to get that done before our presentation in December, since a lot of work has to be done by then in this course and other courses as well. But we said that we might do it by February.

We have been trying to answer the question that we published in our previous blogpost. Regarding the validation of the news, we think it will be done by the community, both by rating and by reporting inappropriate material. We don't think that we, as a company, should validate all the material, since we are a crowdsourcing service: we will get a huge amount of news scoops from people, and if we need to validate all of them before they get published it will not be manageable for us. We only validate the news that gets reported by the community, and by doing that we save ourselves a lot of work, simply by believing in the community and its potential, kind of like YouTube does today. The editing of the clips works in much the same way: we want to give that responsibility to the users. We receive already-edited virtual reality scoops and then distribute them.

We have also resolved a few unclear questions about our concept, such as our view of the service's position on the market compared to other competing companies. We see ourselves in the middle between YouTube and big news companies such as CNN, BBC etc. We are kind of like YouTube in the sense that we are crowdsourced, but we only focus on news, which makes us like CNN. By combining those attributes of our service we create a whole new market: news from the people, for the people.

Another unclear question that we tried to answer was about the interaction with the service. When we presented the new approach to Malin and Daniel, it felt like Daniel thought that we would make it possible to walk around in the virtual reality and that this would be the interaction. The reason why we don't want the users to be able to walk around in the VR is that there is no natural way to make that interaction possible, unless you have access to an unlimited, huge room. The interaction we have decided to go for is more like choosing who you want to listen to and things like that. Imagine being able to listen to an interview with the person you find interesting, instead of listening to everyone that the reporter has interviewed. We are thinking of a scene where there are people around you and you can pick whose story you will listen to. The way you interact is still under discussion, but we imagine the VR goggles will contain some sort of buttons that you can interact with; then again, maybe eye tracking would be better (http://www.getfove.com/)?

We have also decided that the service won’t be subscription based; it will be totally free. This is because we want as many users as possible in order to get as many stories/news scoops as possible.

One of the inspirations we got was from Malin, who showed us this:
It is pretty much how we imagine the result will look.

  • This blogpost ends with a question. What do you think about our validation solution? The validation of the news will be done by the community, both by rating and by reporting inappropriate material. And we, as a company, only validate the content that is reported.