This page collects materials from technical talks. The
talks are arranged in reverse chronological order (most
recent first).
Joint work with Kim Bruce and James Noble
Thursday, 14th September 2017
RMoD Team, Inria Lille-Nord Europe, Lille, France
Slides (PDF, Keynote)
Programming Languages Mentoring Workshop at SPLASH 2015,
Pittsburgh, PA, USA
Tuesday, 27th October 2015
This talk was delivered to the Programming Languages
Mentoring Workshop at SPLASH 2015. Its goal was to
inspire students to pursue research in programming
languages. It shares some of the more significant
personal language design achievements from the speaker’s
career, and blends in contrasting points of view from other
members of the programming language design community.
[ Slides (PDF) ]
MASPEGHI Workshop at ECOOP 2015, Prague
Sunday 5th July 2015
The “Expression Problem” was brought to prominence by
Wadler in 1998, in an email message that focused on
demonstrating the superiority of GJ over Java. It is widely
regarded as illustrating that the two mainstream approaches
to data abstraction—procedural abstraction and type
abstraction—are complementary, with the strengths of one
being the weaknesses of the other. Despite an
extensive literature, the origin of the problem remains
ill-understood. I show that the core problem is in
fact the use of global constants, and demonstrate
that an important aspect of the problem goes away when Java
is replaced by a language like Grace, which eliminates them.
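The tension between the two styles of decomposition can be sketched in a few lines of Python (the language and the names below are illustrative; the talk itself discusses Java, GJ, and Grace, and its actual resolution of the problem is not shown here):

```python
# The Expression Problem: two decompositions of the same tiny language.

# Object style (type abstraction): adding a new variant (say, Mul) is one
# new class, but adding a new operation means editing every class.
class Num:
    def __init__(self, n): self.n = n
    def eval(self): return self.n
    def show(self): return str(self.n)

class Add:
    def __init__(self, l, r): self.l, self.r = l, r
    def eval(self): return self.l.eval() + self.r.eval()
    def show(self): return f"({self.l.show()} + {self.r.show()})"

# Procedural style (procedural abstraction): adding a new operation is one
# new function, but adding a new variant means editing every function.
def eval_expr(e):
    tag = e[0]
    if tag == "num": return e[1]
    if tag == "add": return eval_expr(e[1]) + eval_expr(e[2])
    raise ValueError(tag)

def show_expr(e):
    tag = e[0]
    if tag == "num": return str(e[1])
    if tag == "add": return f"({show_expr(e[1])} + {show_expr(e[2])})"
    raise ValueError(tag)

print(Add(Num(1), Num(2)).eval())                   # 3
print(eval_expr(("add", ("num", 1), ("num", 2))))   # 3
```

Each style is strong exactly where the other is weak, which is why the problem is taken as evidence that the two approaches are complementary.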
[ Slides ]
Joint work with Kim Bruce & James Noble
Thursday, 28th April 2011
University of Edinburgh, School of Informatics
We are engaged in the design of a new object-oriented educational programming language called Grace. Our motivation is frustration with available languages, none of which seems to be suited to our target audience: students in the first two programming courses.
What principles should we apply to help us design such a language? We started with a list of 17 "obviously good principles", aware that some of them would conflict with each other. What we didn't expect was that some of them would conflict with good learning.
One of our principles was that the language should provide one "fairly clear way" to do most things. But suppose that an instructor wants to use Grace to compare two ways of doing something? How can one show students the superiority of one approach over another if the alternative approach cannot be expressed? And yet we can hardly fill our language with every misbegotten language feature of the last 50 years, just so that we can explain to our students why it is better not to program that way!
Prof. Black will outline the principal features of Grace, list the open issues, and listen to your reactions while all of the choices are still on the table. For more information, see http://www.gracelang.org
Slides (PDF, Keynote)
Joint work with Jeff Epstein & Simon Peyton Jones
Tuesday, 26th April 2011,
University of Edinburgh, School of Informatics
Cloud Haskell is a domain-specific language for developing programs for a distributed-memory computing environment. Cloud Haskell is implemented as a shallow embedding in Haskell; it provides a message-passing communication model, inspired by Erlang, without introducing incompatibility with Haskell's established shared-memory concurrency. A key contribution is a method for serializing function closures for transmission across the network.
Cloud Haskell has been implemented; the talk will include some example code and some preliminary performance measurements.
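The talk's examples are in Haskell; as a rough sketch of the programming model only (not Cloud Haskell's actual API), the Erlang-inspired mailbox style can be mimicked in Python, with pickling standing in for the serialization that Cloud Haskell performs for network transmission:

```python
# Erlang-style message passing, sketched with threads and queues.
# Names (spawn, send, receive) are illustrative, not Cloud Haskell's API.
import pickle, queue, threading

MAILBOXES = {}  # name -> mailbox; a stand-in for a process registry

def spawn(name, fn):
    MAILBOXES[name] = queue.Queue()
    threading.Thread(target=fn, args=(name,), daemon=True).start()
    return name

def send(name, msg):
    # Pickling stands in for serialization across the network; Cloud
    # Haskell's key contribution is doing this for function closures.
    MAILBOXES[name].put(pickle.dumps(msg))

def receive(name):
    return pickle.loads(MAILBOXES[name].get())  # blocks until a message arrives

def echo(self_name):
    sender, payload = receive(self_name)
    send(sender, ("echo", payload))

MAILBOXES["main"] = queue.Queue()
spawn("echo", echo)
send("echo", ("main", "hello"))
print(receive("main"))  # ('echo', 'hello')
```

The processes share no state and communicate only by sending messages, which is what lets the same program move unchanged from one machine to a distributed-memory cluster.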
This work was conducted jointly with Jeff Epstein (Cambridge Computing Lab) and Simon Peyton Jones (Microsoft), while Prof Black was a visitor at Microsoft Research, Cambridge, as part of his sabbatical.
Slides (PDF, Keynote)
Joint work with Emerson Murphy-Hill
Friday, 26th October 2007,
University of Toronto, Department of Computer Science
Refactoring is the process of changing the structure of software without changing its semantics. It is widely accepted that continual refactoring of a code base is essential if the code is to stay healthy as it evolves. A high proportion of software changes may be due to refactoring; Xing and Stroulia (2006) report a figure of 70%.
Refactoring can be performed by hand, or by semi-automatic refactoring tools: the advantage of using a tool ought to be both greater speed and greater accuracy, that is, the tool should eliminate the possibility that the semantics of the program is accidentally changed.
In spite of these advantages, our studies have shown that, 8 to 10 years after refactoring became mainstream, the uptake of refactoring tools amongst both novice and experienced programmers is low. This translates to poorer software structure, an increased probability that bugs will be introduced by manual refactoring, and reduced productivity.
In this talk we argue that a major reason for the low uptake of refactoring tools is that many of the tools that the community has produced are not appropriate for the tasks that programmers need to perform. Rather than fitting into programmer workflow, many present-day tools disrupt that workflow. Rather than supporting continual refactoring that aims to maintain healthy code ("floss refactoring"), many tools are designed for refactoring episodes that aim to fix major problems ("root canal refactoring"). Rather than supporting incremental, exploratory refactoring, many tools require that the programmer plan the refactoring first. Rather than providing error dialogues that help the programmer achieve a successful refactoring, many tools — after first demanding extensive configuration — simply give up with a cryptic error message.
Based on these insights, we are working towards a new generation of refactoring tools that have been designed with the programmer's workflow in mind, and that have been tested for usability with practicing programmers.
Slides (not including movies)
Movies: activation statement helper statement view
Sunday, 22nd October 2007
Programming Languages and Integrated Development Environments (PLIDE),
OOPSLA 2007, Montréal, Canada
For the last 15 years, implementors of multiple-view programming environments have sought a single code model that would form a suitable basis for all of the program analyses and tools that might be applied to the code. They have been unsuccessful. The consequences are a tendency to build monolithic, single-purpose tools, each of which implements its own specialized analyses based on its own optimized representation. This restricts the availability of the analyses, and also limits the reusability of the representation by other tools. Unintegrated tools also produce inconsistent views, which reduce the value of multiple views.
This talk outlines some architectural patterns that allow a single, minimal representation of program code to be extended as required to support new tools and program analyses, while still maintaining a simple and uniform interface to program properties. The patterns address efficiency, correctness, and the integration of multiple analyses and tools in a modular fashion.
Sunday, 10th June 2007
History of Programming Languages III
San Diego, California
Andrew P. Black, Norman C. Hutchinson, and Eric Jul
Emerald is an object-based programming language and system designed and implemented in the Department of Computer Science at the University of Washington in the early and mid-1980s. The goal of Emerald was to simplify the construction of distributed applications. This goal was reflected at every level of the system: its object structure, the programming language design, the compiler implementation, and the run-time support.
This talk describes the origins of the Emerald group, the forces that formed the language, and some of Emerald's more interesting technical innovations. It also touches on the influences that Emerald has had on subsequent distributed systems and programming languages.
Tuesday, 7th September 2004
European Smalltalk Users Group, 12th Annual Conference (ESUG 2004)
Have you ever needed to do some analysis of the code in some Smalltalk methods, and spent a lot of time and effort parsing the source and extracting all the information that you needed, only to find that your analysis was still really slow, because you had to grab the source from the changes file?
Abstract interpretation is a technique that in many cases can allow you to collect the same information by extracting it from a CompiledMethod. Although abstract interpretation sounds scary, in Squeak it is really quite straightforward, because the basic framework is already in the image. Moreover, it can also be very fast, because bytecodes are designed to be interpreted quickly.
In this talk we will illustrate what can be done with abstract interpretation, explain the framework provided in Squeak, and show how to dramatically improve performance if this turns out to matter for your application.
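Python offers a close analogue of working on a CompiledMethod rather than on source: the standard `dis` module walks a compiled function's bytecode directly. The sketch below recovers the global names a function references without parsing any source text (this is an analogy to the Squeak framework, not the talk's actual code):

```python
# Extracting "sends" from compiled code by walking bytecode, not source.
import dis

def sent_names(fn):
    """Names a function references, recovered from its bytecode."""
    names = set()
    for instr in dis.get_instructions(fn):
        # Opnames vary slightly across CPython versions; these cover
        # global and attribute references in CPython 3.8 and later.
        if instr.opname in ("LOAD_GLOBAL", "LOAD_METHOD", "LOAD_ATTR"):
            names.add(instr.argval)
    return names

def example(xs):
    return sorted(set(xs))

print(sent_names(example))
```

Because no source is fetched or parsed, an analysis like this can run over thousands of compiled methods quickly, which is the talk's point about why the bytecode-level approach pays off.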
Keynote package (gzipped, 3.9 MB)
PDF with animations (4.0 MB)
Squeak Changeset for the SendsInfo abstract interpreter
Friday, 28th May, 2004
Paper presented at the International Conference on Software Engineering,
Traits are an object-oriented programming language construct that allows groups of methods to be named and reused in arbitrary places in an inheritance hierarchy. Classes can use methods from traits as well as defining their own methods and instance variables. Traits thus enable a new style of programming, in which traits rather than classes are the primary unit of reuse. However, the additional sub-structure provided by traits is always optional: a class written using traits can also be viewed as a flat collection of methods, with no change in its semantics.
This paper describes the tool that supports these two alternate views of a class, called the traits browser, and the programming methodology that we are starting to develop around the use of traits.
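The core of trait composition can be sketched as the merging of flat method dictionaries, with conflicts detected at composition time (a Python sketch with invented names; real Smalltalk traits also support aliasing and exclusion, which this omits):

```python
# Trait composition as merging of flat method dictionaries.
def compose(*traits):
    methods = {}
    for trait in traits:
        for name, fn in trait.items():
            # The same method reached twice is not a conflict;
            # two different methods with the same name is.
            if name in methods and methods[name] is not fn:
                raise TypeError(f"conflict on {name!r}: must be resolved")
            methods[name] = fn
    return methods

TEquality = {"differs_from": lambda self, other: not self.equals(other)}
TMagnitude = {"less_or_equal": lambda self, other: not other.less_than(self)}

# A class built from traits is still, in the end, a flat method collection:
Point = type("Point", (), {**compose(TEquality, TMagnitude),
                           "equals": lambda self, o: True,     # toy definitions
                           "less_than": lambda self, o: False})
p, q = Point(), Point()
print(p.differs_from(q))  # False
```

The two views the browser supports correspond directly to the two sides of this sketch: the structured view shows the traits being composed, while the flat view shows only the resulting method dictionary.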
- 820 kB PDF file (with animations)
- 2.8 MB Powerpoint file (exported from Keynote; quality uncertain)
- 1.9 MB Keynote file
Monday 25th August 2003
Paper presented at the European Smalltalk Users Group Conference
Abstract: Much of the elegance and power of Smalltalk comes from its programming environment and tools. First introduced more than 20 years ago, the Smalltalk browser enables programmers to "home in" on particular methods using a hierarchy of manually-defined classifications. By its nature, this classification scheme says a lot about the desired state of the code, but nothing at all about the actual state of the code as it is being developed. We have extended the Smalltalk browser with dynamically computed virtual categories that dramatically improve the browser's support for incremental programming. We illustrate these improvements by example, and summarize the algorithms used to compute the virtual categories efficiently.
- 1.4MB PDF file (with animations)
- 1.3MB Quicktime movie (includes animations)
- 12.3MB Quicktime movie (full-screen quality, includes animations)
- 1.0MB Powerpoint file (exported from Keynote; quality uncertain)
- 2.4MB Keynote file
Saturday 19th January 2002, 08:45 — 09:45
Invited talk at the Workshop on the Foundations of Object-Oriented Languages,
In association with POPL 2002,
Portland, Oregon, USA
Abstract: This talk presents a personal view of the rôle of objects in distributed systems, past, present and future. It then surveys my current research as it relates to my view of the future.
Monday 10th December 2001
Presented at PECOS Workshop,
IAM, Universität Bern, Switzerland
Abstract: Object-oriented, concurrent, and event-based programming models provide a natural framework in which to express the behavior of distributed and embedded software systems. However, contemporary programming languages still base their I/O primitives on a model in which the environment is assumed to be centrally controlled and synchronous, and interactions with the environment are carried out through blocking subroutine calls. The gap between this view and the natural asynchrony of the real world has made event-based programming a complex and error-prone activity, despite recent focus on event-based frameworks and middleware.
This talk presents an overview of the Timber programming language, which offers programmers a consistent model of event-based concurrency based on reactive objects. This model removes the idea of "transparent blocking", and naturally enforces reactivity and state consistency. We illustrate Timber by a program example that offers substantial improvements in size and simplicity over a corresponding Java-based solution.
Friday 3rd November 2000, 11:00 — 12:00
Department of Computer Science, Oregon Graduate Institute, USA
Abstract: Extreme Programming is a methodology for producing programs that satisfy the customer's requirements as to functionality, timeliness and budget. It is one of a number of new "lightweight" methodologies that look at each of the things that the software engineering gurus have been telling us to do, and ask: what would happen if we don't do that at all?
If the answer is "we would fail", as it is with testing and coding, then the practice is good, and we turn the dial for that practice up to 10. If the answer is "no one would notice", or "we would get more work done", as it is with formal design reviews and writing documentation, we turn the dial down to 0. Hence the name: Extreme Programming.
Of course, this is overly simplistic and couldn't possibly work. The amazing thing is that it seems to ...
Why should you care? Come and find out.
Slides (180 kB)
Updated slides (2001.05.16) with pictures (3 MB)
Friday 27th October 2000, 12:00 — 13:00
Microsoft Cambridge Research Laboratory, England
Abstract: Since the dawn of creation, which, for the purposes of discussing computer programming we will take as 1950, programming has been a linear activity, in the sense that a program is a linear sequence of statements. When we use indentation to group statements, we are attempting to add some two-dimensional structure to a linear artifact, in order to make the program easier to understand.
Instead, imagine a program at a higher level of abstraction: rather than dealing with program text, treat the program as a much richer abstract program structure (APS) that captures all of the semantics, but is independent of any syntax. Conventional one and two dimensional syntax, abstract syntax trees, class diagrams, and other common representations of a program are all different "views" on this rich abstraction.
"Perspectives" is a new approach to software development that uses an APS to describe programs. In this setting, programmers move between different views of a program to help them understand the original code, and to isolate relevant dimensions when changes are required. They create new views that collect together all of the code pertaining to a particular aspect of concern, so that this aspect can be understood in isolation; code irrelevant to the task at hand is out of sight. Such a system has the potential to support more principled and more reliable evolution of software artifacts, reducing the risks and costs that result from being constrained to a single view of a program. We also believe that Perspectives has great potential as an educational tool, since it will enable a complex program, for example, a compiler, to be presented to a class of students incrementally. At any time, the current view can focus on the topic of the current lecture, and extraneous detail can be hidden.
There are no slides from this talk; it was given using the whiteboard. However, a position paper from the OOPSLA 2000 Workshop on Advanced Separation of Concerns covers much of the material.
Thursday 26th October 2000, 16:15 — 17:15
Cambridge University Computing Laboratory, England
Abstract: Infopipes are an abstraction for real-rate information flow in a distributed system. Our goal is to simplify the construction of distributed, real-rate applications.
A typical information pipeline might bring data from a real-rate source, such as a microphone or video-camera, to a real-rate sink, such as a loudspeaker or video monitor. To be useful, such a pipeline must offer guarantees on the latency, rate and jitter of the flow.
Infopipes are a new system-level abstraction that can be used to build such a flow. A pipeline can be built from simple Infopipe components such as buffers, tees, pumps, and filters. Components push information items into, or pull them from, their neighbours. Each component has known properties such as latency, bandwidth and jitter; our goal is to be able to calculate the overall properties of the whole pipeline as a function of the properties of the individual components.
The research questions for which we seek answers include
- What is the appropriate interface for Infopipe components?
- Can we prevent the construction of pipelines that cannot flow, such as a pipeline containing two buffers connected to each other?
- When one component pushes an item into another, how can we ensure that the pushing thread returns in a timely fashion?
- When an item is pulled from an empty buffer, what happens?
- When an item is pushed into a full buffer, what happens?
- Can alternate, more complex behaviours be constructed from simple components?
In our previous research at OGI, we have developed complex feedback-controlled scheduling mechanisms that maintain real-rate behaviour and user-specified Quality of Service in the face of variable resource availability (CPU and network). An important test of the success of Infopipes as an abstraction is whether we can encapsulate these mechanisms as reusable Infopipe components.
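The component model can be sketched in a few lines (the names below are invented for illustration, not the Infopipes API): a pump drives the flow by pulling from its upstream neighbour and pushing downstream, and the open questions above surface as the error cases in the buffer.

```python
# A minimal Infopipe-style pipeline: buffer -> pump -> filter -> buffer.
from collections import deque

class Buffer:
    def __init__(self, capacity):
        self.items, self.capacity = deque(), capacity
    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("push into full buffer")   # open question above
        self.items.append(item)
    def pull(self):
        if not self.items:
            raise LookupError("pull from empty buffer")    # likewise
        return self.items.popleft()

class Filter:
    """A push-through component that transforms each item."""
    def __init__(self, fn, downstream):
        self.fn, self.downstream = fn, downstream
    def push(self, item):
        self.downstream.push(self.fn(item))

class Pump:
    """The active component: each step moves one item down the pipeline."""
    def __init__(self, upstream, downstream):
        self.upstream, self.downstream = upstream, downstream
    def step(self):
        self.downstream.push(self.upstream.pull())

source, sink = Buffer(8), Buffer(8)
pipeline = Pump(source, Filter(lambda x: x * 2, sink))
source.push(21)
pipeline.step()
print(sink.pull())  # 42
```

Composing overall latency, bandwidth, and jitter from per-component properties, as the abstract proposes, would sit on top of an interface like this.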
Wednesday 25th October 2000
Panel Presentation at Joint NSF-DARPA Workshop on Future Directions in Hybrid and Embedded Systems,