From the outside looking in, design (and for this focus — product design) seems pretty black and white. A design now exists where before there was none. There’s an app on a person’s phone that wasn’t there before. A person interacts with a desktop application to get info that was once not available. Where there wasn’t a helpful product before, there is one now.
Those working on the inside, whether they are developers or designers, know it’s not simply a flipping of a switch or following a number of predefined steps to create something that wasn’t there previously. A lot of the focus and activities for a designer are an exploration and connection of elements for a certain purpose. The catch is that a lot of the design is an abstraction of the final delivery that a person will interact with. On one side of the spectrum are the challenges of live data, whether it’s info people are sharing online, geo-related info or combinations of other APIs. On the other end of the spectrum is how people will want to interact with and view the information based on their experience of what is laid out in front of them.
Over time, as technologies come online and information is displayed, understanding of how a user reacts dictates the action. When talking about design in the abstract, it is important to be clear about where in the process this happens to be. It’s possible that someone hearing about “design in the abstract” is thinking about the final product at their fingertips while I’m referring to it as a starting point before things go live.
Of course even “final product” is a bit of an abstract term. Products should be inherently improving over time.
If there isn’t a focus on evolving to stay one step ahead, the product runs the risk of decaying. Steps need to be taken to understand how people are using the product. There need to be design benchmarks to measure if the experience is getting better through the addition, subtraction and updating of features. Along with understanding how people are currently using the product, hypotheses need to consider how they will be better served in the future.
As new elements are added (or removed), older functionality, displays and interactions should be reconsidered. Simple questions about their usefulness need to be asked, and proper action should be taken if the answer is no. While eliminating features that aren’t as useful sounds straightforward, it’s not necessarily another black and white issue.
Features and elements that aren’t helping enhance the overall experience shouldn’t always be removed immediately. Having those elements in front of a developer sometimes motivates them to improve their service faster than if the element is taken offline and placed in a ticketing system such as Jira. Those decisions are case by case.
With all that said — displaying sub par experiences to a person using the product is never respectful of their time.
There are a lot of different approaches to getting to the product’s promised land of perfection and efficiency for the person who will eventually use it. Instead of laying out and comparing the strengths and weaknesses of some of those approaches, it might be worth asking a couple of questions. How do you manage time during a release? How do you make the most of the team at hand? How does the person interact at all stages of the experience (first time, after a couple days of usage, and growing with the product as they move from novice to expert)? How do these things change when considered for an entirely new product, a new feature or an improvement of a current feature? These all present unique challenges and approaches depending on all of the above.
Working backwards through those questions, what’s the product type? Starting from the ground up, let’s suggest a new, bare-bones product should take a couple of months to get usable and testable. Working with the idea of units as weeks, 12 units seems a reasonable set to build an atomic unit, settings and mild interactions. That’s roughly three months. New features to build off of could be 6 units (weeks), while an improvement of a feature could be 3 units (weeks). The idea is that over the period of a year there are three significant upgrades to the product. Again, this is an abstraction of reality which might not be the fit for another group of people building.
So what is happening during those units of time? Looking from the top down, there’s an equal balance of development, design and testing running in parallel and together. Whether it’s a new product or an iteration, the differences are time and what is known vs. the blind spots where hypotheses are made. Connecting the unknowns to the knowns with both design and development within the units of time drives the abstraction into something real.
As users start to engage with the product, another level of design and development is introduced. How does the product work for a person using it for the first time? How does the novice person get comfortable over time, and what are the steps for the product to evolve and grow as the person becomes a power user?
A lot could be said for an onboarding that isn’t overwhelming to a new person who has never used the product. Even more important is making it understandable how the product will make them better off than if they didn’t start using it. If those things are done successfully the product has a shot at being used a second time.
As challenging as it is to convey why the product is important, there needs to be enough reason to come back and use it again, and to keep using it over time.
As a product matures the challenge becomes twofold: how to keep the product useful as technology evolves while staying understandable to people using the product for the first time. Products tend to get more and more complex as the core group of users matures with the product. Unfortunately, for people experiencing the product later in the cycle for the first time it can be overwhelming. One strategy is giving the expert user more control of editing what they see and how. What is difficult is to give these controls at the right time and not become a barrier for someone who is just learning about what they can do, not in the abstract but in their current moment.
To understand what inverted search is, it’s helpful to define what search is. In search’s simplest form the user has an idea of something they are looking for. They type a static query into a site hoping they get lucky. They are likely trying to pull some info that is just past their fingertips.
Inverted search is about designing a system that pushes out info that a person may not have been aware that they needed to know about. It’s a process of sending things dynamically at the right time, through the right delivery systems and on topic.
Other factors such as time and past events create context that makes the info appropriate to act on. The challenge with inverted search is that there needs to be some sort of setup ahead of time to define a range of interest. It’s tough to create a system without any starting info. Equally difficult is asking someone what their unknown interests are.
There are a couple of approaches for the system to extract the appropriate information to define boundaries. One is to set up a range for a number of topics. For instance, if someone wants information on weather and how that is affecting travel, chances are they might not be interested in sports for the same issue. If the person can turn off that topic, they won’t be sent info they have no interest in.
There’s another option though. What if the user can’t really define what their interests are going to be? In fact they don’t want to miss anything and want to consume everything. It’s an exploratory phase that allows the user to gauge the amount of information coming in, and what they can do with it. As they receive the info based on the delivery system they’ve chosen they can understand what range of topics they’re interested in.
The challenge is to be as informative as possible with the info they are getting in terms of why they are getting this. The second challenge is to make it easy for the user to modify a particular topic by either increasing the amount or decreasing the level.
The system also has to define a threshold for passing along important info on a topic even if the user has decided not to receive information about it.
Trying to define a user cycle in terms of novice, intermediate and expert is difficult within the context of inverted search. Someone starting off may be overwhelmed with too much info while an expert may have taken the time to fine tune the system to fit their needs in a way that the novice hadn’t considered.
On the flip side the user just starting out may select a narrow set of options that doesn’t deliver many (or any) results. The person that has spent a lot of time within the system and understands the nuances has probably tweaked their thresholds. The design challenge is to turn that novice into that expert in a short period of time.
Another consideration is how the user will want to see their info. Along with delivery is timing. Important info doesn’t necessarily happen during normal work hours. Along the same idea of timing is quantity of info. Depending on importance, people could want info that meets one threshold sent to their phone while info that meets another is delivered through email.
All these levels of delivery need to be considered in a way that the user can understand, modify and adjust when needed.
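As a thought experiment, the topic thresholds, the mute option, and the per-channel importance levels described above can be sketched as a small routing function. This is purely illustrative: the topic names, weights, and the `CRITICAL` override value are assumptions, not from any real product.

```python
# Hypothetical sketch of threshold-based routing for an "inverted search"
# system. All names and numbers are illustrative assumptions.

CRITICAL = 0.9  # assumed override: always deliver, even for muted topics

def route_alert(topic, importance, prefs):
    """Pick a delivery channel for one piece of incoming info.

    prefs maps topic -> dict with 'muted', 'phone_min', 'email_min'.
    Returns 'phone', 'email', or None (suppressed).
    """
    p = prefs.get(topic, {"muted": False, "phone_min": 0.8, "email_min": 0.5})
    if p["muted"] and importance < CRITICAL:
        return None                      # user opted out of this topic
    if importance >= p["phone_min"] or importance >= CRITICAL:
        return "phone"                   # urgent enough to interrupt
    if importance >= p["email_min"]:
        return "email"                   # important, but can wait
    return None                          # below every threshold

prefs = {
    "weather": {"muted": False, "phone_min": 0.8, "email_min": 0.4},
    "sports":  {"muted": True,  "phone_min": 0.8, "email_min": 0.4},
}

print(route_alert("weather", 0.85, prefs))  # phone
print(route_alert("weather", 0.50, prefs))  # email
print(route_alert("sports", 0.60, prefs))   # None (topic muted)
print(route_alert("sports", 0.95, prefs))   # phone (critical override)
```

The interesting design decision is the last case: even a muted topic breaks through when the importance crosses the critical line, which matches the idea that the system sometimes knows better than the user’s stated preferences.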
I can’t seem to sit still when I read. No matter where and how I’m reading I wish it was on a different screen. While this isn’t always the case, the flow follows something like this.
When I start reading from a desktop I want to read it on a tablet so I can get away from my big monitor. When I start reading something from a tablet I want to read it in print, usually in a magazine format. When I start reading something in print I want to read it on a phone so I can have it with me anywhere. When I start reading something on a phone I want to read it on the desktop so I can add something from a keyboard.
There’s a couple things going on. First I’ve turned reading into an activity. I want to be able to do something with the content that the original format doesn’t provide. The second thing is that I almost always find content at a time in the day when I wish it was on a different type of screen.
What is kind of helpful is that any of the data connected to a screen can be pushed to a different format if I choose.
Sending stuff from the desktop to the tablet can be done pretty seamlessly. The same could be said of starting on the phone and ending on the desktop. Print is the kink in the system. If I discover something on my tablet and it’s not from a magazine I subscribe to, it’s really hard to make the print version.
A lot of my reading is as much about getting something as it is doing something. That connection for me is all about learning, and filing that info as knowledge for future reasons. Having the content flowing from one screen to the next helps continue that process.
There are a couple of distinctions between the types of content I’m reading. It’s likely to be a post from a website (or an article from a magazine), or something from a book. Outside of that it’s typically commentary about one or the other. Articles can be saved pretty easily with something like Pocket. More and more sites also make it easier to save articles within their own systems. Books tend to be read in the Kindle app so I can highlight things for future reference. Print again is the kink, though if I really want to reference something I can use Evernote much like I would with my digital archives of things I want to make note of.
There’s making note of something and there’s the process of making it easy to find something that had been previously read.
It’s still a work in progress for me. If I don’t make a note of it in Evernote it will take some time to find, because often I’ve just saved the article in the ecosystem I read it in. I tend to remember the content more than what device I saved it on. Searching online helps, but it tends to leave me still wanting to figure out where I saved it inside my own system.
I spend a lot of time with Evernote and think they have covered the area about as well as anyone. It all starts with the note, where I put some info down. That can be a photo, a link, an article or almost any other file format. I can title the note, and a date stamp is attached. After making the note I can tag it with words, and I can include it in a group by creating a notebook. Over time, as the content collects and sorting to find something takes time, I can search. The beauty of Evernote’s search is that it can search flat images, not just selectable text.
While I’m not writing that much at the moment, usually I’m saving something because it’s struck a note with me. From there I may want to do something with it in the future. It could be for a post, but it’s usually an idea that I think might be worth connecting to something else. Usually connecting ideas means it’s the starting point of something that is going to be designed. Designing usually means defining a system that will live and flex over time. What I’m saving are ideas that might one day be part of a system.
Moving from one screen type to another takes time. When time collects it becomes significant. Time away takes away from focus. Is that time gained back when trying to pull up old content? Maybe. Being able to move content around also gives me comfort. Being comfortable allows for better focus. It’s a bit of a loop, and what works for one person might not work for another.
For many years now I’ve had the tradition of going to the top of the Empire State Building on New Year’s Day to focus for the upcoming year. It gives me a chance to reflect on the past year and on what’s ahead. Taking a boat around Manhattan may have started a halfway checkpoint tradition during the year. On a calm summer night I floated around Manhattan with my girlfriend Peggy. The weather was pretty close to perfect. There was no humidity, the water was calm and the sun was setting on the city. We headed south from 23rd Street, past the Statue of Liberty and turned around under the Brooklyn Bridge. As the city passed by from various angles I was reminded how fortunate I’ve been for the past year between relationships, work and personal health. The remainder of the year for me is about to go into hyper speed as fall approaches, so it was a welcome reset.
The past couple of months have been pretty interesting around Dataminr. Below are some of the recent articles that have been published.
A startup is finding valuable information in the Twittersphere. Dataminr, a New York startup that analyses the 500m or so tweets sent out daily, goes from strength to strength. Founded in 2009 to scour the Twittersphere for important events and news not yet reported by the mainstream media, the firm now has dozens of customers in finance, the news business and the public sector. In January it and Twitter struck a deal to provide alerts to CNN. In April its tracking of tweets was part of a strategy by the authorities in Boston to avoid a repeat of last year’s terrorist attack at the city’s annual marathon…
Dataminr is one of a growing number of firms built on analysing data from Twitter, though most do not have its focus on real-time news alerts. “Dataminr’s technology is very advanced; every day there is another example of how far ahead they are,” says Vivian Schiller, a Twitter executive.
A startup in New York City called Dataminr is using Twitter to potentially save lives by detecting news faster through analysis of big data. The company has been the first to generate news alerts on a number of events that had the potential to save lives including tornadoes, explosions, forest fires and various shootings. In March, when a gas explosion destroyed a large section of a Harlem block in New York City, Dataminr verified the event had occurred within 200 seconds. It took major news stations over 20 minutes to get hold of the breaking story. In April it helped Boston authorities in a strategy of live-tweet tracking to avoid another attack on the city’s annual marathon.
Dataminr used its big data connections and created an algorithm that saves time by understanding the interrelationships in the everyday lives of Twitter’s users. “People are sensors out there and if you look at our archive over time there’s a flow and pattern for every story there has ever been,” said CEO Ted Bailey. There are over 500 million tweets a day and Dataminr has access to all of them. They take those tweets and then score, tag and classify them through a number of categories including user reputation, user interests, topic density, user influence and spam quotient, among others. Then that information is clustered together by even more factors such as location, momentum and the novelty of the information as well as the co-occurrences of information.
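To make the score-then-cluster idea in that description concrete, here is a toy sketch of the general technique: weight a handful of per-tweet signals into a score, then group high-scoring tweets by topic and coarse location and flag dense clusters. This is emphatically not Dataminr’s actual pipeline; the field names, weights, and thresholds are all invented for illustration.

```python
# Toy illustration of score-tag-cluster on tweets. Not Dataminr's real
# algorithm: signal names, weights, and thresholds are invented.
from collections import defaultdict

# Assumed weights; spam counts against a tweet's score.
WEIGHTS = {"user_reputation": 0.4, "topic_density": 0.3,
           "user_influence": 0.2, "spam_quotient": -0.5}

def score(tweet):
    """Weighted sum of per-tweet signals (each assumed to be in [0, 1])."""
    return sum(WEIGHTS[k] * tweet.get(k, 0.0) for k in WEIGHTS)

def cluster_and_flag(tweets, min_size=3, min_score=0.3):
    """Group scored tweets by (topic, coarse lat/lon cell); flag dense cells."""
    clusters = defaultdict(list)
    for t in tweets:
        if score(t) >= min_score:
            cell = (t["topic"], round(t["lat"], 1), round(t["lon"], 1))
            clusters[cell].append(t)
    return [cell for cell, members in clusters.items() if len(members) >= min_size]

# Four similar reports near the same Harlem-ish coordinates (made-up data).
tweets = [
    {"topic": "explosion", "lat": 40.81, "lon": -73.94,
     "user_reputation": 0.9, "topic_density": 0.8,
     "user_influence": 0.3, "spam_quotient": 0.0},
] * 4
print(cluster_and_flag(tweets))  # [('explosion', 40.8, -73.9)]
```

A real system would add the momentum and novelty factors the article mentions (how fast a cluster is growing, whether it matches an already-known story), but the shape is the same: many weak individual signals, aggregated until they are collectively trustworthy.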
Twitter has firmly established itself as the global 911 as well as a faster, more local CNN, a place to call for help and shout the news – a place where people flock when they need to know things fast…
The New York-based data analytics startup Dataminr offers an early example of this. With nearly $50 million in funding, the company sifts through tweets to identify newsworthy events. In March, when a gas explosion decimated a section of a Harlem city block, Dataminr confirmed the event had occurred within 200 seconds; it took local news stations 20 minutes to cover it. Because the startup excels at pattern recognition, it can quickly verify which information is accurate — and which, like the hacked Associated Press Twitter account that broadcasted two explosions in the White House, is a hoax.
At the intersection of big data and real-time information, Dataminr Chairman and CEO Ted Bailey told CNBC Friday that his start-up aims to make sense of the disparate Twitterverse to give finance clients a profitable edge…Using the recent siege on Iraq’s largest oil refinery by militants as an example, Bailey said: “We were able to detect that and alert our clients on Wall Street to that conflict and its potential disruption over six hours ahead of any single other source they had access to.”
The crux of Dataminr’s contribution to crisis reporting, and what sets it apart from apps with similar filtering potential, is its contribution to verification. Identifying the source behind an event is imperative, given the enormous amount of tweets that could be sent about it and the countless examples of errors made in the past.
The app not only delivers information on the spot, but includes a thorough analysis based on a user’s preferences. Through its analytical module, Dataminr can determine the source behind an event, enabling journalists to “replay how that event broke to identify potential sources that were on the scene”, explains Dataminr’s Dan Bailey. Here, Dataminr can be particularly useful for the human aspect of the verification process.
When Boston officials decided to monitor Twitter during this year’s marathon, they didn’t scan the site’s 500 million daily posts for signs of trouble. Dataminr did that for them. The company’s software sorts through millions of tweets for clues about major events or emerging threats, flagging mentions of everything from fires to suspicious packages and sending real-time alerts to customers. Dataminr has been quietly working with public safety officials in Boston and three other cities with the aim of detecting potential criminal or terrorist activity bubbling up on Twitter before it happens.
Goals are a rallying point in a soccer match, as I noted in my prior post “People tend to be more creative with spelling goal when their team scores”. Yeah, that’s pretty obvious, but how that goal is seen offers a bunch of different vantage points. I tend to look at the back of the net to see if it moves; for others it’s enough to judge by the roar of the crowd.
While the World Cup in Brazil is just starting I think it’s safe to say Van Persie’s first goal against Spain will be mentioned as one of the best of the tournament. His goal also serves as a great example of peeling back the goal to see how it happened.
There were a number of angles of the actual goal, though I think this animated gif shows how truly amazing the goal was between the distance of the pass, the timing to make contact and the arc of the ball going over the goaltender.
Nice, but where did he actually make contact with the ball?
Looking at the diagram that ESPN made shows where inside the 18 yard box contact was made. Comparing the gif to the diagram, the goal becomes even more impressive. The simple diagram demonstrates the level of difficulty of the goal. From that distance, accurately placing the ball in the net fails more times than it succeeds.
So how did the goal come to be?
As bad as the NYT can be at writing about soccer culture (see Deadspin’s article The New York Times Turns Soccer Fandom Into A Trend), their take on the actual World Cup is quite good. They’re posting info about each match in near real time. The above diagram was posted moments after the goal. Unfortunately there are no anchor links inside the post, so you’ll have to scroll down a bit to see the chart at nytimes.com/…/worldcup/spain-vs-netherlands
What should also be mentioned is that I saw all these clips via Twitter as the match was going on. Each of those pieces of data adds another layer to what a great goal Van Persie’s really was.
van Persie's wonder header is a big hit on Social Media. Dutchies are imitating the landing and calling it #persieing pic.twitter.com/L0IQzrMBBo
— Dutch Football (@DutchftLeague) June 15, 2014
I love watching the World Cup almost as much as I love Twitter and seeing what people have to say. Examining the data of both at the same time seemed like a no-brainer. While my observations aren’t really scientific in any way, looking at some key words during the first match was hard to pass up.
With the eyes of the world watching the first match and tweeting at the same time, I was curious to compare the tweet volume of a couple of words. Goal seemed like the logical choice as the go-to marker of the event. Looking at the graph, it’s easy to see when the goals of the match were scored. But what about variations of the word goal? During the last World Cup I never spelled goal correctly. Almost always I’d add a couple of extra o’s when I was excited. So I decided to check a couple more spellings.
It was intriguing to see a larger spike for the second peak of “goooal”. That was Brazil’s first goal. Of course “ref” spiked at the same time as “goal” with some controversial calls. What does this mean, aside from the obvious? People tend to tweet the same thing for an event. The wrinkle is that perhaps with soccer people get more animated with the goal words when the team they’re cheering for scores, as opposed to just reporting the obviousness of a goal.
For the sake of this post I added some great visualizations from Four Four Two. They display a lot of data that is fascinating to dissect. Below is Brazil on the attack compared to Croatia. It really puts the team philosophy and strategy into perspective.
I’ve been collecting a number of review books over the past couple of months. I’m hoping to have full reviews for most of them before the end of the summer. With that said I wanted to jot down a couple initial reactions.
Of all the books that are grouped here I can’t think of one that is more applicable to me. I’ve been shooting for quite a long time. For the most part, after sharing them I’ll cluster relevant images into a post. While doing that I’ve often thought there’s got to be more to this. Publish Your Photography Book looks like it came at a perfect time for me. I’m hoping that I’ll be able to learn about publishing beyond just posting to Instagram and Flickr, with a bit more purpose.
Of all the books I have to read, this one has been at the top of the stack for quite some time. I’m not far enough in to make comparisons, but it looks like a nice refresher of A History of Graphic Design by Philip B. Meggs. With that said, I was a bit surprised that the list doesn’t have many contemporaries who are 35 years or younger. It tends to lean older and toward familiar names. Given that any list will miss out on designers, as Paul Shaw notes in a recent AIGA Medalists piece, Graphic Icons: Visionaries Who Shaped Modern Graphic Design attempts to refresh a history that many designers may have forgotten from their early years of practice. To get a taste of the book, check out John’s FastCompany article 10 Crucial Lessons From History’s Greatest Graphic Designers.
I’m not sure what else is out there for digital typography books these days, but Type On Screen has timing on its side. The book shows relevant examples (for the most part), gives entry points for future learning and is understandable. My only misgiving is with the title. Given the fluid nature of digital, I don’t think the word “critical” is a helpful term.
I was drawn to Wayshowing > Wayfinding for the real world examples. Most days I’m considering the best way to lead a person through an understandable world on a screen. I’m interested to know how people are doing the same thing off the screen.
Just like the photography book, The Bike Deconstructed holds a lot of relevance for me. I love cycling and anything related to it, and I’m also building out a bike. Starting with the frame, I’m making a custom bike that should be ready in the upcoming weeks. Having read parts of the book while talking with the builder was extremely helpful.
As this post notes, there are a lot of design principles out there. What I’m hoping to understand with Make Design Matter are some relevant considerations that might be unique to the author.
The Book of Trees looks to build off of the author’s previous book Visual Complexity. Examining 11 different types of tree visualizations, this book looks helpful to anyone working with a lot of data and a need for organization.
I was fortunate enough to preview Century: 100 Years of Type in Design with Dan Rhatigan, Type Director of Monotype, who walked me through the exhibition. The exhibition bills itself as an opportunity to explore the past, present and future of type in design as part of AIGA’s Centennial Celebration. I was immediately drawn to the periods (all unique) on the walls and floor of the entire space. It sort of felt like I was in a typographic snow globe. The periods also help give structure to the displays. Thankfully Dan was able to guide me through the assortment of typographic collections.
Talking with Dan, I was curious to hear what surprised him as the exhibition was taking shape. One of the surprising themes that emerged was color. When we think of typefaces we tend to think in a black and white, abstract way. The color story from Strathmore paper from the 1930s was very aspirational. In terms of other discoveries, looking at the Condé Nast materials revealed that they had an entire production center in Connecticut. They were one of the premier printers for color production of that era and pioneered a number of color production techniques. On a different front, the commissions of Alan Kitching took color in a different direction. It all ties into a re-exploration of what color in typography could be.
As they began looking at the archive materials of the partners they have worked with, they realized the limitations of time and resources ruled out a truly comprehensive educational look back. But by looking at some representative collections they could examine a number of different typographic expressions of graphic design and look back at the background of the type that contributed. It wasn’t just about individual typefaces like Bodoni, Caslon, Didot and Helvetica, but also multitalented designers popping up, such as William Addison Dwiggins, an AIGA Medalist who designed the typefaces Metro and Electra and did work for Strathmore. Bruce Rogers is another who designed Centaur but also did work for Strathmore and was involved with the AIGA. It’s nice to see how the nodes started to emerge through the collections.
Pentagram’s contribution is notable for the custom typography they have commissioned for projects. It’s a different side of the story about typography and graphic design: getting things just right and bespoke. It’s nice to see examples of people taking something that already exists and taking it another step for their own projects.
There’s a broad cross section of industries represented in the exhibition, such as public works, transportation, medical advertising, and association work. Monotype and Linotype originally built on newspaper, book and magazine publishing because of what they brought in terms of speed of production and the explosion of literacy that you get from typesetting, and eventually branched out to all types of design.
Pentagram has done some work showing what is possible with typefaces as software these days, which is really just scratching the surface of all the places where typefaces show up digitally at this point. Again it comes back to the notion that we can show a taste of what’s possible, not just in the past but currently. There are so many ways that digital typefaces are used on the web, in apps and embedded in UIs, but it is tricky to get a cross-section of those in a small exhibition because it almost becomes a device show rather than a design show. For instance, Pentagram has done some animations displayed on the above wall showing what is possible with typefaces as data by going deep into the breadth of the library. One cycles through a number of typefaces. A second dynamically cycles through the periods, which is live code. This shows what is possible with access to an entire library.
Condé Nast is showing the other side of digital with their online platforms. They’ve chosen to display Wired and the New Yorker in all their formats. Part of the project is to digitize their entire content, which is made possible with web fonts to get the right clarity and scalability. This is where we are bringing this all forward, showing what can be done digitally, building off of a digital heritage.
These among many other examples will be on display from May 1 to June 18, 2014 at the AIGA National Design Center in New York City. The exhibition is free and open to the public.
Monday through Wednesday: 11:00 a.m. to 6:00 p.m.
Thursday: 11:00 a.m. to 1:00 p.m.
Friday: 11:00 a.m. to 5:00 p.m.
*Please note: The gallery will be available by reservation only on Thursdays from 1:00 p.m. to 5:00 p.m. for guided tours.
*Reserve your space on a guided tour through EventBrite Tours take place every hour on the hour.
AIGA members and the public are invited to view the exhibition at a reception in conjunction with NYCxDesign on Wednesday, May 14, 2014, from 6:00 p.m. to 8:00 p.m. at the AIGA National Design Center.
Additional info at Monotype and Pentagram to Present “Century: 100 Years of Type in Design” in New York City http://www.prweb.com/releases/Century/exhibition/prweb11798682.htm and CENTURY: 100 YEARS OF TYPE IN DESIGN http://www.aiga.org/century-exhibition/.
I’ve compiled a group of photos that I’ve shot since January around the city. The categories grouped themselves as I started looking back. The biggest surprise going through the photos was that I thought I had shot more than I actually did.
Below is my review of why Clear Sans from Typographica was my favorite typeface of 2013. For quite a few years Typographica has invited an assortment of people to talk about their favorite typeface. This year is no different with the eclectic mix of different typefaces mentioned. Read them at your own pace at typographica.org/features/our-favorite-typefaces-of-2013/
Having spent time recently focusing on dispersed levels of data, I was drawn to Clear Sans for its practical nature. The different weights between light, thin, regular, medium, bold, and even italic offer great options for both readability and contrast, making all sorts of type and numbers easy for users to digest. More and more I noticed that I didn’t have to squint the way I usually do with fonts that I tend to see used a lot for dashboards, analytics, and other user interfaces. One trick seems to be to use type in “playful” ways, set large; Clear Sans feels grown up and swings to the other side of the spectrum.
Experimenting, I noticed an efficient use of space both vertically and horizontally. In terms of line height, the short descender space of the ‘g’ and ‘p’ work to keep things compressed while not feeling crushed, thanks in part to the large bowls that keep the characters open. Letters like ‘m’ and ‘w’ adapt well to ‘a’, ‘c’, and ‘e’ and help to keep words tight (in a good way). I also noticed that Clear Sans remained readable whether sitting on a flat background or on busy imagery. The sharp angles of ‘n’ and ‘u’ also made the face stand out for me.
The sensible nature of Clear Sans makes it easy to work with. Aside from the technical attributes, it doesn’t try to feel too “techie” with irrelevant flourishes. Each letter has a nice detail but doesn’t overpower the next letter. If I use words like “efficient” and “practical” a lot here, it’s because they give a good overview of what each character is like viewed up close.
There’s a post on Gigaom from Ted Bailey of Dataminr talking about how they were able to verify the tragic explosion in Harlem before local news. Titled How Twitter confirmed the explosion in Harlem first, he breaks down some of the factors involved in verifying it.
In just the first minutes after the Harlem explosion, aggregate Twitter data revealed a lot about what had happened. People acted collectively as an on-the-ground detection and sensory network, depicting the scene with granularity long before first responders or reporters arrived.
These Twitter eyewitnesses in Harlem provided a mosaic of images and first-hand accounts — all emerging from one location in a short time. Additionally, the geo-proximity of tweets, the shape and rate of tweet propagation, and the linguistic signatures of the messages quickly illustrated the potential magnitude and importance of what had occurred — all before traditional information sources had even arrived on site.
According to the NYC Fire Department, the explosion took place around 9:31 AM EDT. Twelve minutes passed before local news reported that a serious event had occurred. It wasn’t until 20 minutes after the explosion that the first major news coverage of the tragedy appeared. A wealth of detailed information flowed through Twitter immediately following the explosion and continued throughout those first 12 minutes — and well beyond.
The graphic below shows a set of geo-localized tweets that Dataminr algorithms clustered together during these initial 12 minutes. Before any single source could confirm the event, the truth was already in the tweets. The unique pattern of tweets painted a specific picture confirming the event was in fact happening and that the people tweeting thought it was a “big deal.” The descriptions in the tweets provided raw and important insight into how these eyewitnesses were experiencing the event in real-time.
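The clustering described above can be illustrated with a toy sketch. This is not Dataminr’s actual algorithm, just a minimal illustration of the idea: group tweets that fall within a small radius and time window of each other, and a burst of co-located tweets becomes a signal.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def cluster_tweets(tweets, max_km=0.5, max_minutes=12):
    """Greedy single-pass clustering: a tweet joins the first cluster
    whose seed tweet is within max_km and max_minutes; otherwise it
    starts a new cluster. tweets: (minute, lat, lon) tuples."""
    clusters = []
    for t in tweets:
        for c in clusters:
            seed = c[0]
            if (abs(t[0] - seed[0]) <= max_minutes and
                    haversine_km(t[1], t[2], seed[1], seed[2]) <= max_km):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

# Three tweets near the same Harlem block, one from several km away.
tweets = [(0, 40.807, -73.945), (3, 40.808, -73.944),
          (5, 40.806, -73.946), (4, 40.758, -73.985)]
print(len(cluster_tweets(tweets)))  # → 2: one dense cluster, one outlier
```

A production system would weigh far more signals (propagation rate, linguistic features), but even this naive pass shows how a knot of geo-proximate tweets stands out from background noise.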
Kevin Cyr and Gary Taxali currently have solo exhibitions on display at Jonathan LeVine Gallery until March 22, 2014. Fans of visual design have probably come across both individuals’ work before. Kevin Cyr is known for his distinct representation of vehicles with the markings of the city, while Gary Taxali is commissioned quite often for editorial design.
Prior to seeing Kevin Cyr: Right Place, Right Time, I had only seen his work online. What I’ve always enjoyed about his work is his attention to marks that a lot of people walk by in real life without a second glance. Examining the tags, marks and other graphic elements becomes a rewarding exercise. The markings open up a dialog of questions, both about the people marking the vehicles and the city surrounding them.
Walking around Gary Taxali: Unforget Me, the difference in scale was interesting for me. I’m used to seeing his work in magazines and design annuals, so seeing it at a much larger scale than a printed page was enjoyable. Not just for the aesthetics (though they are great) but for the concepts behind each piece. There’s a lot of visual play going on. Looking closely at each piece, details of the printing come into view that often flatten when reproduced in a magazine. For those paying attention to the details it’s often a rewarding experience.
A couple weeks ago I had the opportunity to talk with friend Amrita Chandra of Normative Design. You can read the full conversation on Normative’s blog with the post titled Normative Conversations: Michael Surtees of Dataminr.
Amrita: Dataminr is an incredibly complex tool – is there anything you can share about how you tackled the design process?
Michael: Let me try! We have 3 products but they are all trying to solve a fundamental problem of finding info. That info resides in a tweet, which we typically refer to as an alert.
There’s tons of ways people find an alert, it could be a notification on a mobile device, an email, an instant message or inside a desktop application. Knowing that, I typically call the alert the atomic unit.
Our process involves trying to best understand how a user would define what is important to them and translate that into the product through settings, displays and iterations based on user testing. As new data becomes available and we interpret feedback, we iterate.
Typically we’ll work on one product for a period of time, move on to the next and once we have evolved the second product some of the new features will find their way into the old one as we start on the third.
At the end of the day the design process is based on use cases, feedback, testing and iteration.
Hopefully that sheds a bit of light.
The assumption is that the smartphone has made it easier to take a photo, but is that really the case? It seems like people are pulling out their phone all the time to capture something, but what are they doing with it afterwards? Are they uploading it to Instagram, tweeting or sharing it within a private network? Taking the process one step further, how are they actually finding that image at a later date? For instance, say a person wants to see what they did on their birthday two years ago. Are they looking through their phone’s archives, using their favorite desktop program or something else? Simple questions with many potential outcomes.
I probably make it more complicated than it needs to be, but I wanted to map out my process. To start, I sketched a flow showing the steps.
Shoot > Edit > Upload > Backup & Publish
Shoot: I’ll usually start with my Sony RX100 II, a great pocket-size camera that is NFC enabled. Beyond the size and photo-sharing capabilities, the camera shoots amazing images.
Edit: After pulling my camera out of my pocket I’ll tap my HTC One and camera together to get the selected image on my phone. My phone serves two purposes: first to edit the image, and second to send it online to specific networks. A majority of the time I’ll start editing the image in Snapseed and then push it to VSCO for some minor tweaks.
Upload: Once the image is ready I’ll push it to both Instagram and Flickr. They both have benefits that complement each other, which I’ll talk about shortly.
Backup & Publish: I use IFTTT in the background. Every time I upload an image to either Instagram or Flickr it will automatically save a copy to Dropbox. I do this to back up all my images in case something happens on either network. From there I might publish the image, which I define differently from just uploading to Instagram and Flickr. For me, publishing means either sharing the image on Twitter, VSCO or potentially my blog. Instagram is much more of a social network, whereas Flickr provides more options for archiving, search and formats.
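The backup step boils down to a simple sync rule, which can be sketched like this. This is a hypothetical stand-in for what an IFTTT-style “new upload” trigger does, not its actual implementation, and the filenames are made up for illustration:

```python
def pending_backups(uploaded, backed_up):
    """Return uploaded image filenames not yet in the backup set,
    preserving upload order (an IFTTT-style 'new item' trigger)."""
    done = set(backed_up)
    return [name for name in uploaded if name not in done]

# Hypothetical example: two newer shots still need copying to Dropbox.
uploaded = ["harlem-fog.jpg", "hudson-ride.jpg", "walking-to-work.jpg"]
backed_up = ["harlem-fog.jpg"]
print(pending_backups(uploaded, backed_up))
```

Running the rule on every upload event, rather than on a schedule, is what makes the backup invisible: the copy to Dropbox happens moments after the image hits Instagram or Flickr.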
Formats, Square vs Infinite Proportion
Most of the time I don’t think about shooting in a square format so I like that Flickr gives me the ability to use an infinite number of proportions. With that said I’ll upload to Instagram first so I can see what the image looks like as a square. When I move on to Flickr if I don’t like the natural proportion I’ll use a square.
It’s fascinating to compare some square images to their natural format. Sometimes it’s better as a square, sometimes it’s the other way around.
Proportion format is just one feature of an application that allows a person to display an image. From there I looked at a couple of other features and compared them across Instagram, Flickr and VSCO. I broke out four prominent features and ranked them as low, medium or high in terms of benefit for each service. Archives refers to how easy it is to find an image after a period of time. Format is what options a person has to place the image in a proportion. Filters are pretty self-explanatory in that they allow adjustments to the image. The final benefit I ranked was social, based on how easy it is to follow people, for people to follow back, and the interaction between people.
Looking at each feature independently, each service is better in some and worse in others. For me personally, if I rank each feature as equally important it’s hard to pick a clear-cut “better” service. I find this interesting in that there are so many options out there, but to get the best experience they need to work together within my flow to be beneficial. A year from now I wonder if that will still be the case.
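To make the comparison concrete, here’s a small sketch of how low/medium/high rankings can be rolled up into a score per service. The rankings below are illustrative placeholders loosely based on the discussion above, not my actual rankings:

```python
SCORE = {"low": 1, "medium": 2, "high": 3}

# Hypothetical rankings for illustration only.
rankings = {
    "Instagram": {"archives": "low", "format": "low",
                  "filters": "high", "social": "high"},
    "Flickr":    {"archives": "high", "format": "high",
                  "filters": "low", "social": "low"},
    "VSCO":      {"archives": "medium", "format": "medium",
                  "filters": "high", "social": "low"},
}

def total(service, weights=None):
    """Sum the ranked features, optionally weighting some higher."""
    weights = weights or {}
    return sum(SCORE[level] * weights.get(feature, 1)
               for feature, level in rankings[service].items())

# With equal weights the services tie, which is why there's no
# clear-cut winner; weighting archives higher tips it to Flickr.
print({s: total(s) for s in rankings})  # all tie at 8
print({s: total(s, {"archives": 2}) for s in rankings})
```

The interesting part is the weights: a different person (one who cares only about social, say) would get a different winner from the same rankings, which matches the feeling that the services complement rather than replace each other.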
Last week at the Time Warner Center it was announced that CNN and Twitter had partnered with Dataminr for News. I’ve linked to a couple of the articles that speak about the product. For me this was one of the most exciting products that I have worked on so far, from start to finish: taking unstructured data and making something incredibly helpful on multiple levels, designing around a lot of different formats, and all the UI challenges that came along with real-time info display. On top of all that, just seeing news being made as it happens is incredibly powerful. Along with the excerpts from the articles, I’ve posted a couple of sketches and images from the video presentation.
WIRED: New Twitter Tool Finds Hot Topics Before They Trend
By the time a topic is “trending” on Twitter, it’s probably old news already. Today in New York City, data-crunching company Dataminr announced a new tool for journalists. Its goal is to seek out news stories before they’re heavily reported.
Dataminr For News does this by scanning Twitter, albeit in a slightly different way from existing filter tools. The news-gathering tool was developed as a partnership with Twitter and CNN, whose reporters and editors have been using the software for six months and have helped shape it during that time.
The Verge: CNN announces partnership with Twitter to ‘revolutionize’ news gathering
The company says it now produces about two stories a day based off tips from its Dataminr alerts. “It’s like bionic vision for our reporters,” said Kenneth Estenson, general manager of CNN digital. “It helps us to see things faster than our competition and to act with confidence.”
TechCrunch: CNN And Twitter Partner With Dataminr To Create News Tool For Journalists
Dataminr CEO Ted Bailey said the goal is to “alert journalists to information that’s emerging on Twitter in real time.” Basically, the technology looks at tweets and finds patterns that can reveal breaking news when it’s still in its “infancy.” Those alerts can be delivered in a variety of ways, including via desktop applications, email, mobile alerts, and pop-up alerts.
The program, which was piloted with CNN, isn’t just for alerts, Bailey added. He said it also offers analytics data around who broke the news and how it spread. He also said that it doesn’t just have to be for breaking news, but for finding interesting features, as well.
Mashable: CNN Doubles Down on Twitter-Based Reporting With Dataminr Deal
Dataminr scans Twitter for patterns based on a variety of factors including language and location, which are then put through algorithms to select important events and present them to journalists as alerts. The company previously specialized in providing financial analysis of sentiment on Twitter. It rose to prominence by detecting the impending announcement of Osama bin Laden’s death before the major news outlets.
CNN president Jeff Zucker lauded the partnership as “a really important tool and resource as we continue to gather news,” in comments at Wednesday’s event. “There have been many examples already of how we’ve used Dataminr.”
Business Insider: Twitter Gears Up To Launch A TweetDeck On Steroids For Journalists
TweetDeck is already an insanely helpful tool for journalists, but it’s only useful if you know what you’re looking for. Enter Dataminr for News, a soon-to-be released product that is the result of a partnership between CNN and Twitter. The idea is to help journalists get the most out of Twitter by alerting them of emerging information in real time. Dataminr also offers analytics around who broke the news, and how it spread on Twitter.
In order to distinguish between accurate information and what just might be a hoax, Dataminr takes into account 25 different things to determine how legitimate the tweet is, including user influence, geolocation and momentum.
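The article doesn’t spell out how those 25 factors combine, but the general idea — a weighted score over normalized signals like influence, geolocation and momentum — can be sketched as a toy. The weights and signal values below are hypothetical, not Dataminr’s model:

```python
def legitimacy_score(signals, weights):
    """Weighted average of signals normalized to [0, 1]; a higher
    score suggests the tweet is more likely legitimate."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w
               for name, w in weights.items()) / total_weight

# Three of the factors named in the article, with made-up weights.
weights = {"user_influence": 2.0, "geolocation_match": 3.0,
           "momentum": 1.0}

eyewitness = {"user_influence": 0.3, "geolocation_match": 1.0,
              "momentum": 0.9}
likely_hoax = {"user_influence": 0.2, "geolocation_match": 0.0,
               "momentum": 0.4}

print(round(legitimacy_score(eyewitness, weights), 2))   # → 0.75
print(round(legitimacy_score(likely_hoax, weights), 2))  # → 0.13
```

Note how the eyewitness account scores well despite a low-influence author: being geo-located at the scene outweighs follower count, which is exactly the property you’d want when verifying breaking news.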
Before I started my morning ride around the Hudson I knew it had the potential to be pretty special. I started biking pretty early, just as the first light appeared to reveal the fog. No milestones were beaten, as it seemed like I was stopping every couple of minutes to capture something on the water. The temperature fluctuated a little during the ride, which kept the fog around and made everything look great.
With the new year upon us it makes sense to make predictions for 2014. Unfortunately this post isn’t going to talk about possible tech turns for this year. What I did want to note is how a few tech companies closed out 2013 and how they presented the info to anyone who would listen. Over the past month I kept an eye on what was common from company to company and what was different. This is by no means an exhaustive list, but each displays data in an informative way that is unique to that tech company.
I started by looking at Pocket, a service that allows users to save online articles for later. Typically I’ll save something from the desktop using their Chrome extension, while on mobile I save mainly from Twitter. The second product I looked at was Google. They have collected enough data to give a decent overview of what most users on the internet were interested in getting more info about. While I don’t use Tumblr that much, I thought it was a good contrast to Google in terms of not what people are searching for but what people wanted to talk about. Spotify is my go-to music app both for listening on the go and when I’m at work. Looking at Twitter was a no-brainer, as I’m pretty much using and looking at their data every day on the hour.
Some positive themes that became apparent had to do with making things easier such as
• Made their service even more enjoyable within context of their product
• Easy to discover new content
• Easy to discover new features
• Balanced bite size amounts of data while making it easy to dive deeper
Some not so positive themes that could be fixed
• Difficult transition from desktop to mobile (phone and tablet etc) with the data
• Overwhelmed people with their info
• Their actual product wasn’t used to display their top content
• Forcing the user to click a lot for little reward in terms of content
Of the examples I looked at, I couldn’t think of a better way than what Pocket did to display top content in the context of their service. They had well defined categories and displayed the content in a digestible manner that rewarded the user for exploring further. As a user hovered over a headline they made it easy to add it to their own Pocket for future reading. Even better, they made it easy to select all articles within a category by adding a “Save All to Pocket” button. I cannot overstate how great this is in terms of saving a user time, while also respecting the user by giving them more options than just select all or select one.
Digging deeper than just displaying categories, they also ranked sites. Very quickly it showed what the Pocket community was reading most. Even better, if a person didn’t see their favorite site they could drop in a URL and Pocket would automatically display the top results if it was in the top 1000 publishers. Clicking Find a Site would open a search field. The biggest surprise was seeing FT.com articles while nothing came up for Pando.com. By checking out Pocket’s best of the best I saved a ton of new articles that I wouldn’t otherwise have discovered, which made the experience even better.
Google has a lot of data, more than any one person could probably make sense of. Understanding that, Google broke out their info in a flexible way by creating a lot of topics with just enough info that it wasn’t overwhelming. What I appreciated was that while everything was essentially a list, they included enough images for each topic to make it easy to scan. Along with scanning they also made it easy to expand any topic of interest, balancing a good overview with the ability to dive deeper. They also displayed ways of finding new info that an average non-power user might find helpful the next time they search.
I liked all the different categories of content that Tumblr displayed. What I didn’t like was that it took two or more clicks from the home screen to see anything, and if I didn’t see anything quickly it took a lot of clicks to get back. In terms of raw content Tumblr probably had more interesting things than Google, but it took quite a long time to discover anything. It would have been nice to do something like Google did, displaying some of the content from each topic as a window into a more in-depth look.
I didn’t find Spotify’s data as helpful as it could have been because the UI hindered my ability to find things quickly. Between all the scrolling, clicking and having the website jump to their native product, the experience wasn’t so great. An even bigger disappointment: where I could see data on my own habits, they didn’t even make that part interactive. Hopefully next year they’ll consider creating an internal app that works inside their product first and port it to the web afterwards.
There’s a ton of info that Twitter could show from the past year, and I have to give them credit for using their own product to display the info. The biggest thing that hindered me from finding what Twitter thought were the best tweets was the limited window for scrolling inside a section. If I scrolled slightly outside of the window I was focused on, it would move the entire page down. Almost every time I used the site I had to scroll back to the top of a category to continue.
Looking back over the last year while I was #walkingtoworktoday, I’m starting to collect some of the images that fit together. Over the past couple of years I’ve really enjoyed seeing what serendipitous themes emerge. Here’s a simple set with other people working while I was walking.
Chances are that if you are a designer and have an iPad of some size you’ve probably come across 53’s Paper app. If you have, you may have come across their Pencil that became available to order not so long ago. Today I opened an envelope from 53 with the Pencil. Below is my unboxing experience. I can’t recall a time when I learned so much about design and inspiration through analog means.
↧ Oh nice, I haven’t seen a tube for packaging. Nice touch showing some of the ink on the bottom of the tube.
↧ This looks like it has been thought out quite a bit. Note to self, keep shooting every action.
↧ Starting to realize they’ve considered all the details. Glad I’m keeping note.
↧ Can’t wait to use this
↧ Starting to want to know how this came together. Even the system to power it up is smart.
↧ Chances are that this tab isn’t coincidental. Wonder what’s next…
↧ Hope I don’t have to use this but appreciate the protective care.
↧ Everything has been a prelude to these instructions.
↧ Covering the bases, can appreciate that.
↧ This keeps getting better.
↧ Note to self about the type…
↧ These instructions are asking to be opened.
↧ On boarding took less than 30 seconds. Note to self, this is a benchmark to test against.