This repository is a stub for your final project. Fork it as a template for your project, and develop your code in the forked repository. For details on how to fork and turn in the project, see section 3 of the GitHub Education documentation. After you fork the repository, please enable the issue tracker in the repository settings so that others in the class (including the professor) can provide feedback.
Expand on the README questions below to provide an overview of the goals, background, and challenges of the final project. You can delete the questions as you answer them in your own text, or leave the prompts in place. You can also delete this instruction section if you like.
This is a final project for the Interacting with Data seminar in fall 2015. This project (a very brief, i.e. 1-2 sentence, overview of the project)...
To view this project, ... (embed visualization here or provide instructions on how to view the project).
Description of data...
- Data source (simulated / published / unpublished?)
- Data structure - what are the variables? How are they organized? What states can they take?
Examples of previous visualizations of similar data or processes, if any exist... Include links or add images to this markdown document... How were data mapped to aesthetics in these previous approaches? Was there filtering?
Shortcomings of previous approaches, or potentially interesting gaps between them...
How will aesthetic attributes (x / y / color / shape / size / texture / etc.) be mapped to the data? (A hypothetical sketch illustrating these mapping and filtering questions appears after the next few questions.)
Are data filtered? That is, in some views are some data not mapped to particular attributes of the image? What is the goal of the filtering?
Are there aesthetic attributes that are not mapped to the data? If so, what purpose do they serve (redundancy for robustness / improving the visual metaphor / putting data in context / beauty / etc.)?
Are any data mapped to more than one aesthetic attribute? Why?
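As a concrete but entirely hypothetical illustration of the mapping, filtering, and redundancy questions above, the Python sketch below maps placeholder columns to x, y, color, and size, filters some rows out of the view, and encodes one variable redundantly. The dataset, column names, and threshold are invented for illustration only and are not part of this project.

```python
# Hypothetical sketch of the kinds of mappings the questions above ask about.
# The dataset, column names, and threshold are placeholders, not project data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "year":       [2010, 2011, 2012, 2013, 2014, 2015],
    "value":      [3.2, 4.1, 5.0, 4.7, 6.3, 7.1],
    "population": [120, 150, 180, 200, 260, 300],
})

# Filtering: drop observations below a threshold, so they are not shown at all.
visible = df[df["population"] >= 150]

# Aesthetic mapping: year -> x, value -> y, population -> color AND size
# (mapping one variable to two attributes is a redundant encoding).
plt.scatter(
    visible["year"], visible["value"],
    c=visible["population"], s=visible["population"],
    cmap="viridis",
)
plt.colorbar(label="population (placeholder units)")
plt.xlabel("year")
plt.ylabel("value")
plt.show()
```

In the actual write-up, replace this placeholder with the project's real mappings and explain the reasoning behind each choice.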
If motion is used, what purpose does it serve (metaphor, e.g. representing motion in the real world / transition continuity between views / etc.)?
To what extent is perspective (e.g. the mappings) controlled by users vs. hard-coded in advance? How does this project aid in exploration vs. exposition?
Was the new visualization successful at providing insight that was impossible or more difficult to obtain with previous approaches?
What are the main limitations of the new approach?
What future directions could this project take?