The thing I’m best known for is my work writing PsychoPy, the free, easy-to-use software for psychology experiments. The work on that and Pavlovia also led me to create a company, Open Science Tools, to manage those tools professionally and sustain them with a reliable revenue stream.
PsychoPy offers a unique combination of interfaces, allowing you to create experiments in a really simple visual interface, with a flow diagram and dialog boxes to control stimuli, and a Python library for those who want to hand-code scripts in the powerful Python programming language.
PsychoPy has become the software package of choice in labs and undergraduate classrooms all over the world, with over 40,000 monthly users. For (independent) metadata about the project see www.openhub.net/p/PsychoPy
Jon Peirce
Research
Visual neuroscience

For years, the main focus of my research was in visual neuroscience. I studied how we recognise things visually. Surely we must know all that already?! No, not by a long way.
We know a lot about the eye and we do a reasonable job of making devices (digital cameras) that do a roughly similar job. But imagine a piece of computer software that took your digital photographs and identified what was happening in them. “This is a photo of your father, sitting on a wooden chair with a carving of a rose on it. Next to him is a Yorkshire terrier chewing a bone.”
For you it’s trivial because of your visual neurons, but imagine trying to tell a computer what rules it should follow to understand the contents of a photograph. Imagine the range of lighting changes and hairstyles that a person can have without you ever failing to recognise them. Despite the advances of computer AI, your visual recognition is still so much better than any automated detection system, which is why the "Captcha" tests that check whether you're a robot still largely rely on visual recognition.
We need to know how the brain does such a remarkable job of perceiving the world. That is the endeavour of visual neuroscience.
Research Methods (meta-science)
I wrote a package called PsychoPy to help my own research efforts and, as it got popular, my interest in improving research methods, as an area of research in its own right, grew. My particular interests in the area are as follows.
I care about precision. Software to measure human performance, for psychology, neuroscience or linguistics, needs to present stimuli and measure responses precisely. I spend a lot of time trying to understand where performance may be poor and also where that matters (there are also times when people worry about levels of precision they don't need).
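As a toy illustration of one side of this (a hypothetical sketch, not code from PsychoPy), you can estimate how fine-grained a timing clock really is by spinning until it ticks over. The function name `clock_resolution` is mine, invented for the example:

```python
import time

def clock_resolution(clock, samples=1000):
    """Estimate the smallest time step a clock function can measure."""
    smallest = float("inf")
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:      # spin until the clock visibly ticks over
            t1 = clock()
        smallest = min(smallest, t1 - t0)
    return smallest

# time.perf_counter is designed for interval timing and is usually far
# finer-grained than wall-clock alternatives, which is one reason
# timing-sensitive software has to choose its clock carefully
print(f"perf_counter resolution: ~{clock_resolution(time.perf_counter):.1e} s")
```

The measured value varies by operating system and hardware, which is exactly the point: you have to measure it on the system you are actually using, not assume it.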
I care about reproducibility. Scientists should be able to reproduce their studies and analyses but, surprisingly, most can't. It's all too easy to forget some detail of how you ran the study or analysed the data. "How exactly did I calculate the position of the stimulus?", "Was this version of the graph from before or after I found that better way to normalise the baseline?". I believe strongly in reproducible experiment scripts and data pipelines where all the details are kept for re-use.
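A minimal sketch of what "keeping all the details" can mean in practice (a made-up example, not PsychoPy code): if the seed of the random generator is recorded as part of the study, the exact "random" trial sequence can be regenerated years later.

```python
import random

def simulate_trial_sequence(n_trials, seed):
    """Generate a 'random' trial order that can be regenerated exactly,
    because the seed is stored with the rest of the study's details."""
    rng = random.Random(seed)   # private generator; global state untouched
    conditions = ["left", "right", "neutral"]
    return [rng.choice(conditions) for _ in range(n_trials)]

run_original = simulate_trial_sequence(8, seed=2024)
run_reanalysis = simulate_trial_sequence(8, seed=2024)
assert run_original == run_reanalysis   # same seed, identical sequence
```

Using a dedicated `random.Random(seed)` instance rather than the module-level functions keeps the sequence immune to any other code that happens to call the global generator.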
I care about making things easy. I think a lot of errors occur in science because tools are too hard to use and don't automatically check for mistakes. I believe easier, more informative tools make for better science.
I care about sharing. Science is better if we share (isn't everything?!). Partly that makes studies more reproducible, but it also accelerates the rate at which we can develop and run studies. I try to encourage this by a) making it easy to share studies using the tools I develop and b) setting a good example by sharing my own materials (i.e. open-sourcing my tools) as much as possible.