Computers and Electronics in the Arts

Australia 75, Canberra, March 1975

Cover of the programme for Australia 75. The image was created by Stan Ostoja-Kotkowski using distorted laser images.

In Australia the 1970s, especially between the election of the Whitlam government in 1972 and its “sacking” in late 1975, were a period of great creative energy, of innovation in ways of looking at ourselves as a society, and of great optimism for the future: the Vietnam War was ending (30th April 1975, fall of Saigon) and colour television had just arrived (1st March 1975). With Gough Whitlam as Prime Minister it was a good time for the arts, a good time to go to university, and there was plenty of interest in science and technology. The Australian Council for the Arts was revamped as the Australia Council and given a solid funding base.
The National Gallery of Australia made some controversial purchases. These were actually supported by the government of the day (and have proved to be very astute investments). The Interim Council of the Australian Film and Television School (now the Australian Film, Television and Radio School) was established, as was the Video Access network.
In late 1972, the Aquarius Foundation of the Australian Union of Students (led by Graeme Dunstan and Johnny Allen) drew together and funded a collective of community activists interested in video and media production (including Mick Glasheen, Joseph el Khouri, Melinda Brown, Jack “FatJack” Jacobsen, Tom Barber and Jonny Lewis) to set up a cable network in Nimbin, NSW, during the Aquarius Festival of May 1973. Many of the collective then decided to continue the project and formed Bush Video, which operated as an artists’ video production facility in Sydney until mid-1975. Drawing on the Nimbin project and the Canadian Challenge For Change project, in early 1974 the Film & TV Board of the Australia Council set up a network of Video Access Centres (in association with the Commonwealth Department of Urban and Regional Development) and, through the Experimental Film Fund and subsequent initiatives, supported many other smaller developments in new video production and technologies. The Video Access Centres were basic video production facilities in Sydney, Melbourne, Brisbane, Adelaide and Perth, intended to open up access to the new video technology and make it available to the public for the production of community-based media debate; to independent producers (some of whom came from the Sydney and Melbourne Film-makers Co-ops, or were a slightly later generation of the kind of people who made up the Co-ops, or Film and TV School students); and to experimental video artists, for example members of Bush Video.
Around 1975 there were three levels of computing:
Sydney University's first mainframe computer, SILLIAC.

Mainframes, such as those made by IBM, Univac and English Electric, which belonged to the big institutions, and SILLIAC, built at the University of Sydney Computer Science Dept.

Here, the University of Sydney’s first mainframe: SILLIAC, built within the Department of Computer Science and commissioned in 1956. The operator in this image is Nerida Smith.

The PDP-8 facility set up by Doug Richardson in the Computer Science Department of the University of Sydney.

Minicomputers (e.g., Digital Equipment Corporation’s PDP-8), used in research laboratories, and workstations used in certain kinds of bureaus, for example in architectural and engineering work.

The image shows a DEC PDP-8 minicomputer (plus EMS Synthi A audio synthesiser and b&w camera) used by Doug Richardson in the computer graphics lab at the University of Sydney.

A single board kit computer owned by Stephen Jones with a graphics memory card in one of the S-100 bus slots on the board.

Just beginning to appear, the microcomputers – mostly kit computers based on Intel’s 8080 or Zilog’s Z80 chips and built by enthusiasts for their own experimentation.

This is my first computer: Z80 CPU, S-100 bus, 64 Kbytes of RAM, Z80 counter-timer, PROM-based bootstrap. Chips socketed and hand-soldered to the motherboard.

But computer technology was still very difficult to get access to. Mainframe computers were large monsters hidden away in air-conditioned floors of banks, insurance companies, government offices and academic research institutions, while science fiction writers dreamed up all sorts of scenarios in which computers existed as a pending Big Brother which would take over the world and turn us all into slaves.
For most people computers were inaccessible. The only likely contact was through computer forms on punched cards and the phrase “Do Not Fold, Spindle or Mutilate”. Occasionally you could get to see a computer at a University Open Day, where you might have been fortunate enough to receive a computer print-out of a calendar with a picture of Snoopy at the top of it, but you would almost never get to use one for your own creative work.
Computers, especially for the politically active, were associated with the high technology companies in the US (that made their money from the Vietnam War) and were not really acceptable among activists and most artists. The minicomputer was in some ways even more inaccessible, since it tended to be hidden away in laboratories or back rooms of offices where it was attended by a specialist acolyte. To some extent it was being used to develop graphics (mostly for data visualisation) or for system development in university computing labs. And the microcomputer was something that only Ham Radio operators and the readers of popular electronics magazines were likely to know about. For activist artists, social issues (feminism, the anti-freeway movement, green bans and the anti-whaling campaign) were the order of the day, and the rare exhibition of Computer Art made very little impact.
The first desktop computers that the public could buy were just becoming available and, if you bought one, it came as a kit that you had to build and program yourself [see picture above]. The only people who were likely to do that were electronic hobbyists and a few young scientists.
One of those scientists, Doug Richardson, a young computer programmer at the University of Sydney, had recently completed a computer graphics software project using a Digital Equipment Corporation PDP-8, a mini-computer which had a large radar screen as a graphical display and was used in many scientific laboratories both in Australia and overseas.
Doug Richardson running a program on the PDP-8 computer in the Computer Science Department of the University of Sydney.

Doug Richardson loading a program into the PDP-8 using a teletype. The circular screen to his left is the graphics screen for the computer.

Some of the work that he and other artists interested in experimenting with technology had produced with his system had been shown at exhibitions in Sydney and Brisbane. This led to Doug being invited to organise an exhibition of computer arts for the Australia 75 festival to be held in Canberra over 7th – 16th March 1975. He decided to call the exhibition “Computers and Electronics in the Arts” and invited many of the people he had worked with to contribute and asked them to invite others they knew to also contribute.
There had already been some computer-art events, though mostly overseas (notably, Cybernetic Serendipity, ICA, London, 1968, and New Tendencies [1], Zagreb, Yugoslavia, 1968). In Australia the first computer graphic/art work was done with the system that Doug Richardson developed between 1969 and 1974 on the PDP-8 mini-computer in the Computer Science Department at the University of Sydney. Initially, two local artists, Frank Eidlitz [2], a graphic designer who had become acquainted with computer graphics in the late 1960s while visiting the US, and Gillian Hadley [3], as well as the video collective Bush Video, used Richardson’s Visual Piano (as he called it). The members of Bush Video came out of architecture (Mick Glasheen [4] and Tom Barber (tensegrity) [5]), film-making (Joseph el Khoury [6], Melinda Brown [7]), photography (Jon Lewis [8]) and experimental electronics (Fat Jack Jacobsen [9], Ariel [10]).
Like much modern art in general, computer art was something the public was not well acquainted with and, as noted above, the rare exhibition of it made very little impact. For people engaged with current art and political culture, social issues, feminism, the anti-freeway movement, green bans and anti-whaling were the order of the day, and computers, associated as they were with the American high technology companies that made their money from the Vietnam War, were not really acceptable among activists and most artists.
This was the “Whitlam era”, during which there was something of a period of intense development in the arts and, in more members of the community than there had been previously, an awareness (perhaps only a suggestion) of the liberatory possibilities of new technologies and new communications media. This no doubt had to do with some aspects of the hippie movement, the publishing of the Whole Earth Catalog and perhaps also derived from McLuhan’s The Medium is the Massage, although other parts of the hippie movement were diametrically opposed to these developments.
But the period brought about a willingness to explore the potentials of new technologies [with the intention of liberating them from the exclusive grasp of the multi-national corporation]:
  • beginning with a widening interest in experimenting with video that, in Australia, may have been first attempted by the artists who had established Inhibodress in late 1971,
  • the development of a computer graphics system for artists’ use by Doug Richardson at the University of Sydney over 1970-74, and the development of a computer graphics system at the ANU Department of Engineering Physics,
  • the re-inspiration of modern dance, especially through Philippa Cullen’s interest in making the music/sound follow the dance through the use of interactive sensors (theremins), electronic music and video documentation.
  • the establishment of Bush Video as an experimental video and electronics collective in Sydney.
and a great fecundity in the presentation of new and electronic music [for example AZ Music, directed by David Ahern; others like Greg Schiemer and Martin Wesley-Smith; the Melbourne New Music Centre; Stephen Dunstan; Harvey Holmes and Harvey Dillon with their Timbron; and doubtless others whom I don’t know about].
For all these people experimentation, improvisation, mixed-media and the combining of the various forms were desirable and elegant things to do.
Later, from about 1976, other techies and artists began to write graphics software on the small systems built around the recently arrived S-100 bus kit computers based on the Intel 8080 and Zilog Z-80 CPU chips (Ariel aka Mark Evans [10], Ray Lade). You still had to build your computer at that point and so there was only a small group of interested techies (e.g., John Hansen), radio hams, and one or two artists (e.g., Shaun Gray [11]) who would take this task on. It wasn’t long after that when the Apple IIe appeared. The first one I saw was owned by Guy Dunphy.
A little later (c. 1975-76) computer-aided design (CAD) in Architecture began to be consolidated, computer-aided electronic circuit design and simulation developed and the first attempts at computer graphics for television were made, but all of these practices had to wait up to another ten years for their fruition. A little earlier (c. 1965-70) the analogue computer, an early to mid 20th century computing technology that could simulate engineering functions to an accuracy sufficient for most purposes, became the music synthesiser and was starting to appear as the video synthesiser.

Australia 75 – Computers and Electronics in the Arts

Doug Richardson had been developing his drawing package over the early 1970s. More or less as a culmination of this project he accepted an invitation to curate an art and technology exhibition called Computers and Electronics in the Arts [C&EitA] at the Australia ’75 Festival of Creative Arts and Sciences, to be held in Canberra, March 7-16th, 1975. This exhibition turned into the single most important meeting place for most of the people involved in electronic arts in Australia at that time. Having already been working with a number of artists, he decided that the Australia ’75 exhibition should showcase the work in experimental technology that a variety of academics and artists were doing and the systems they used. So he drew together many of the people who were doing interesting things in computer graphics and computer music, electronic synthesisers, video and interactive dance performance. These people (those who could be in Canberra at the time of Australia 75) made arrangements with their departments, organised sponsorship or just decided to get involved on the basis of their own resources.
Richardson’s motivations in producing Computers and Electronics in the Arts are canvassed in the exhibition programme.

“The opportunity to present the computer in a more favorable light than that of monster is one that Richardson has been hoping for. He considers that most of the established artists have an inbuilt fear of the computer, suggesting it is some sort of threat to a person’s humanity, his job, or his privacy. The problem as he sees it is to make people feel at ease with the computer, and that was the idea behind the first film that he ever made, and the basic idea behind this exhibition. The adaptation of computers to all art forms is in his opinion a logical progression; and as inevitable as the family motor car. Once the prototype has been dreamed up.” [13]

Richardson wanted people to see that the computer could be adapted to all artforms and the idea of the exhibition was to demonstrate this with work from artists, musicians and computer scientists “with an understanding of the potential of machines” [3] from whom “will come tomorrow’s accepted visual and aural self-expression by means of the computer” [3] and these means will be available to everybody.
The exhibition predicted the ubiquity of computers for everyone and, by bringing together “the country’s leading artist/technologists”, demonstrated some of the possibilities that newly developing technologies in computing, video and audio production, and interactive performance, would bring about. The exhibition also aimed to ease access to the new technologies and to make them less threatening, making it possible for the audience to not only watch but to participate in their use.
The people who contributed to the exhibition by presenting their artworks and experimental technologies ranged from artists working with lasers, portable video technology, analogue music synthesisers and a prototype digital music computer, to composers working with computer programmed music for live musicians, engineering experimenters from the University of NSW and the Australian National University, and dancers who made their own music using pressure sensitive floors.
At the exhibition and the daily series of workshops that made up Computers and Electronics in the Arts, Richardson showed some of his work, Bush Video was represented by Ariel and Joseph El Khoury, John Hansen brought his video synthesiser, and Stephen Dunstan brought his sound synths from Melbourne. Dunstan was accompanied by the Melbourne New Music Centre. Iain Macleod and Chris Ellyard of the ANU Department of Engineering Physics (the first computer science group within ANU) were involved, Philippa Cullen and her dance company brought their pressure sensitive floors, Stan Ostoja-Kotkowski showed off his Laser Chromoson and his interactive theremins, Harvey Dillon demonstrated the Timbron (a music synthesiser from UNSW), and there was computer composed music by James Penberthy. I was there as a visitor from Brisbane and ended up videotaping much of Philippa Cullen’s work and the performances of some of the electronic music groups.

The Lakeside Hotel Ballroom as it was fitted out for Computers and Electronics in the Arts

Gough Whitlam, then Prime Minister, opened the festival in the C&EitA exhibition site at the Lakeside International Hotel ballroom on the evening of Friday 7th March, 1975. He noted:

“There has never been a time when the sciences and the arts – activities so pervasive, so popular, so fundamental to our way of life – were less able to support themselves. I must say that our investment in these fields should be a community investment, that these basic areas of research and creativity should be protected from any suspicion of private gain or official patronage.

“Only government can provide the funds for an Opera House, a nuclear reactor, a theatre, a telescope or an orchestra. Modern enlightened governments accept these things as being just as much part of their responsibility as education or health.” [12]

A series of cameos involving sight and sound synthesis followed. Despite the problem of one cantankerous computer, electronic music emanated from synthesised frequencies, a string quartet played music as a computer composed it, dancers on a “pressure sensitive floor” created music as they moved. [13]

Participants:

Philippa Cullen and Dancers

A highly innovative and socially thoughtful dancer. Whilst also exploring the dance cultures of many nations, she was deeply interested in the relationship of new technologies to the development of dance. She was especially interested in interactive techniques and brought with her a set of four pressure sensitive floors. These should have been coupled to an audio synthesiser, but for some reason this failed, and several participants spent a long night integrating her floors into the rest of the equipment that had been set up, which had the effect of integrating much of that equipment into what became (at least for Cullen’s performances and workshops) a single system. She had also been experimenting with biofeedback (EEG and EMG) sensors. [Sadly, Philippa died in India in July 1975.]

Greg Schiemer, Phil Connor

“A teaming of scientist and musician. They explore new methods of performing and creating music. Their computer based system is an outgrowth of their own research and the help given to Philippa Cullen.” [A75 exhibition catalogue.] Schiemer had brought two pedestal theremins which a dancer could stand on so that when they remained still there was no sound, but as soon as they moved the sound echoed their movements.

Bush Video:

“A group that have produced hundreds of videotapes on many subjects. They show some of the artistic uses of video technology and record Australia 75.” [A75 exhibition catalogue.]

ANU Engineering Physics (AI) group:

In particular Iain Macleod and Chris Ellyard.
They brought the department’s PDP-11 and their string digitiser pad and several VDUs (Visual Display Units). They had built a frame store in core memory that gave 512 pixels x 512 lines and configurable colour mapping. It was this device that prepared computer graphical output (and in particular the processed data from Cullen’s floors) for the video systems that were available, to synthesise with and display to the audience. [See their papers]
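As an aside for readers unfamiliar with the idea, here is a rough sketch in Python (my own illustration with invented names and an invented eight-entry palette, not the ANU hardware or software) of what a frame store with “configurable colour mapping” amounts to: each pixel holds a small index rather than a full colour, and a separate lookup table decides what colour each index is displayed as, so rewriting the table recolours the whole picture without touching the stored image.

    import numpy as np

    WIDTH, HEIGHT = 512, 512
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)   # one colour index per pixel

    # A hypothetical 8-entry colour map: index -> (R, G, B)
    colour_map = np.array([
        (0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255),
        (255, 255, 0), (0, 255, 255), (255, 0, 255), (255, 255, 255),
    ], dtype=np.uint8)

    def plot(x: int, y: int, index: int) -> None:
        """Write a colour index into the frame store."""
        frame[y, x] = index

    def scan_out() -> np.ndarray:
        """Convert the stored indices to RGB through the colour map,
        as the display hardware would on every video frame."""
        return colour_map[frame]

    plot(100, 200, 3)              # draw a single pixel with index 3 (blue)
    colour_map[3] = (255, 128, 0)  # re-mapping index 3 recolours that pixel instantly
    rgb = scan_out()               # 512 x 512 x 3 array ready for display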

Stan Ostoja-Kotkowski:

Stan Ostoja-Kotkowski had been in Canberra for a couple of years, 1971-72, as an artist in residence with the ANU School of Physical Sciences on a Creative Arts Fellowship, and had built a Laser Chromoson tower for the 1971 Australian Universities Arts Festival, which was held at ANU. His main contribution to the Australia 75 Festival was a laser tower in Garema Place in the civic centre of Canberra. However, he also showed several interactive theremin sculptures built for him by the workshop of the Department of Engineering Physics. These objects would respond to people’s presence with coloured light and sound. The Laser Chromosons were 75cm diameter spheres with theremins built in, which made them respond with light and sound as people approached and touched them. [this description needs confirmation].

James Penberthy:

Is an Australian composer who prepared and presented string quartet performances of music composed with a computer system he had developed with the assistance of programmer Dan Chadwick. He was supported by the Australia Council and ICL Ltd.

John Hansen:

at that stage was an electronic engineer who had developed electronic jewellery using LEDs (c.1972) and was exploring video synthesis. He presented his latest creation, sponsored by an Australian Council for the Arts grant, which “is a video synthesiser, producing exciting special effects on colour television in response to musical or visual input.” [A75 exhibition catalogue.]

Steve Dunstan:

Built his own electronic instruments: audio synthesisers with circuitry built on Perspex sheet and intuition. He was also a jazz musician and composer and brought, along with his synths and instruments, much good cheer and elegant improvised music. “His music and his instruments are a reflection of his personality – fascinating and unique.” [A75 exhibition catalogue] [he died in the 80s in somewhat mysterious circumstances]

The Timbron:

A music synthesiser with a pressure sensitive playing surface similar to the neck of a ‘cello. It was developed at the University of N.S.W. by Associate Professor of Music Roger Covell and Professor of Electrical Engineering Harvey Holmes. [A75 exhibition catalogue.]

Ancillary notes of interest:

Richardson relates a tale of how, when they came to set up, a last-minute booking delayed them until the evening before the opening day and they worked all night to bring in and set up all the equipment. There were two mini-computers involved and they had to be craned in through the window. They got everything running just in time for the opening by the Prime Minister, Gough Whitlam, and Richardson told me that he was so exhausted he could hardly speak when he was introduced to Whitlam. [15]
Another interesting tale from Australia ’75 illustrates very well the way things developed in those days. Philippa had brought with her a set of floors with pressure sensitive devices in them so that as the dancers worked on the floors they could control the sounds they were dancing with. The connection between the floor and the synthesisers failed and didn’t seem to be fixable, so Philippa had decided that she and her dancers should go home. There was, it seemed, no more interactive possibility. So a number of the people who had systems in the ballroom got together to solve this problem for her. Iain Macleod and Chris Ellyard (of ANU Engineering Physics) were running a PDP-11/40 system lent by DEC. They wrote software that allowed the computer to read the floors via an analogue-to-digital converter (A/D), using this data to draw a “history” of the dancer’s movements as an image. The image was then sent to John Hansen’s video synthesiser where it was combined with camera images of the dancers and video feedback. The dancers moved to music supplied by Steve Dunstan and his audio synthesiser, which also supplied audio modulation for keying (matte) control of John’s video synth. The output was displayed on a bank of colour video monitors and had everybody very excited. Performances on this integrated system drew large crowds and opportunities to play with the system were taken up at every possible instance. All this happened almost overnight and illustrates just how much interest there was in integrating all sorts of disparate systems, as well as how willing everyone was to do everything possible to make this integration work.
I remember entering a darkened cave full of the screeches and rumbles of electronic music, the hum of computers and the luminous evanescence of video displays, dancers working the floor and the floor driving the computer to control the sound (this was a very special floor). Bush Video had a wall of monitors, Philippa Cullen and her company were the dancers, and John Hansen had his video synth and his electronic jewellery.

Posters for Computers and Electronics in the Arts.

There were two posters for the Festival. The main Australia 75 poster was a laser image produced by Stan Ostoja-Kotkowski. It was made by shining a ruby laser through a piece of once-molten glass so that the beam was refracted and bent by the time it reached a screen. The refracted laser image was photographed and printed by multiple exposure to give the three merging copies of it in the poster. The image was also used on the front cover of the main programme for Australia 75. [see the top of this site.]

Making a picture from the alphabet

The second poster is a very interesting example of what could be done around 1975 in making images with scientific computers of the time. It was made for use as the poster side of the programme for the “Computers and Electronics in the Arts” exhibition. The image was designed by Alistair Hay, a graphic artist at the Australian Information Service (AIS), and produced with the assistance of computer scientists at the Department of Engineering Physics in the Research School of Physical Sciences at ANU, in particular Iain Macleod who wrote the computer programmes that helped to scan the photographs and print out the poster and Chris Ellyard, who did the actual operational work to produce the poster.

Hay was interested in the potential for using computers in design, and producing a poster specifically for the exhibition provided the perfect opportunity. Starting with a photograph taken by Malcolm Lindsay (an AIS photographer) of a pair of ballet dancers at the barre, Hay re-interpreted it for the computer. Because computer memory was very limited and very expensive in the 70s, bit-mapped graphics [photographic quality images] were not widely produced. However, grey-scale images could be represented by using alpha-numeric, or specially designed, character blocks of varying density [contrast, or darkness or brightness] as surrogates for the grey-scale value of small regions of the image. Effectively the image was made of large pixels, each presented as a character. This kind of image is not unlike what we now call ASCII graphics.

For Hay and the Engineering Physics team the solution was to design a special set of [density surrogate] characters to make a computer printing of the photograph. Hay designed a set of pictograms using abstract drawings of dancers transformed into graphic characters. The need for a density range from dark to light meant that the working set comprised eight characters, each representing one of eight tonal values: the four pictograms chosen from the set he designed, two of which were also rendered as negatives, plus a black and a white block. The set of pictograms and the main photograph were scanned into the computer at ANU Engineering Physics.
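The principle is easy to sketch in modern terms. The following Python fragment is only an illustration of the general density-surrogate idea (using an ASCII ramp rather than Hay’s dancer pictograms, and an invented cell size): divide the grey-scale image into coarse cells and print one character per cell, chosen from a ramp ordered from dark to light.

    import numpy as np

    # Eight "density" levels, darkest to lightest (stand-ins for the eight pictogram characters).
    RAMP = "@%#*+=-."

    def image_to_characters(image: np.ndarray, cell: int = 8) -> str:
        """image: 2-D array of grey values 0..255. Returns a text rendering."""
        rows = []
        for y in range(0, image.shape[0] - cell + 1, cell):
            row = ""
            for x in range(0, image.shape[1] - cell + 1, cell):
                mean = image[y:y + cell, x:x + cell].mean()   # average brightness of the cell
                level = int(mean / 256 * len(RAMP))           # 0..7
                row += RAMP[level]
            rows.append(row)
        return "\n".join(rows)

    # Example: a simple horizontal gradient renders as runs of characters
    # going from dark on the left to light on the right.
    gradient = np.tile(np.linspace(0, 255, 128), (64, 1))
    print(image_to_characters(gradient))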

Scanning, Plotting and Printing

The photograph and the pictograms were scanned into the computer at the ANU using a scanner made from a modified plotter. A plotter is a device which draws lines onto paper rather than printing rows of characters or image pixels onto it. By replacing the pen with a special light-detecting head the plotter could be made to scan an image.

Once the photograph and the pictograms had been scanned in, the grey-scale picture values of the pictograms were substituted for tonal regions in the photograph. The image could then be plotted out using the plotter in the manner for which it was originally designed. This produced a copy [plot, printout] of the poster, approximately 10 metres long by 25 cm wide, in segmented form. These long strips were then cut down, arranged and glued together onto a backing board, forming the image for the poster. The result was then photographed to a large transparency, printed to fibre paper and used to make the poster. The main image shows the finished poster and the image below shows a test segment of the plotter output (the plotter raster-lines can be clearly seen).

New Music and New Electronic Instruments

Electronic music has a variety of origins. At the beginning of the 20th century one Thaddeus Cahill began building an electronic sound synthesiser, known as the Telharmonium, which he completed in 1906. It used 145 alternators and was controlled from touch-sensitive keyboards. It was huge and produced around 10,000 watts of sound. [16]
In the 1920s the Russian radio engineer Leon Theremin discovered that when a pair of very high frequency oscillator signals were mixed together, any change in the relative frequency of the two signals would produce a lower frequency tone, a “beat” frequency, representing the difference between the two. This heterodyne principle was already important in radio engineering, but it could also produce pure tones. He designed an electronic musical instrument, the Aetherophone, later known as the Theremin, around this principle. In a theremin, one of a matched pair of oscillators has a fixed frequency while the frequency of the other is determined by stray capacitance in the region of its wire antenna. When a person comes near the antenna the capacitance of the electric field surrounding it changes, altering the frequency of the second oscillator, so that when the outputs of the two oscillators are mixed together they produce a “beat” frequency in the auditory range. The theremin thus produces sine waves of a frequency determined by the distance between a person (or their hand) and the antenna.

The basic arrangement of the theremin. The frequency of the variable oscillator is controlled by the capacitance on the antenna, which varies according to the presence of bodies and other things. When subtracted from the frequency of the fixed oscillator a “beat” is generated (think of tuning a guitar). As the person moves around near the antenna the capacitance on it changes, changing the frequency of the variable oscillator which in turn changes the audible “beat” frequency. [Graphic: Stephen Jones]
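For those who like to see the arithmetic, the following short Python sketch (with illustrative oscillator frequencies, not those of any particular theremin) mixes two high-frequency sine waves, low-pass filters the result, and recovers the audible difference tone, which is the “beat” described above.

    import numpy as np

    sample_rate = 1_000_000                     # 1 MHz sampling so the high-frequency tones are representable
    t = np.arange(0, 0.05, 1 / sample_rate)     # 50 ms of signal

    f_fixed = 170_000                           # fixed oscillator (illustrative figure), Hz
    f_variable = 170_440                        # variable oscillator, pulled by hand capacitance

    # Mixing (multiplying) the two oscillators produces sum and difference tones.
    mixed = np.sin(2 * np.pi * f_fixed * t) * np.sin(2 * np.pi * f_variable * t)

    # A crude low-pass filter (moving average) removes the ~340 kHz sum
    # component and keeps the audible ~440 Hz difference: the "beat".
    kernel = np.ones(101) / 101
    audio = np.convolve(mixed, kernel, mode="same")

    # Locate the dominant surviving frequency.
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / sample_rate)
    spectrum[0] = 0                             # ignore any residual DC offset
    print(f"beat = {freqs[spectrum.argmax()]:.0f} Hz (expected {f_variable - f_fixed} Hz)")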

A variety of electro-mechanical and electronic devices were built over the next 20 years until, in France in the later 1940s, Pierre Schaeffer made a series of works, which became known as Musique Concrète, using natural and man-made sounds recorded onto phonograph records. [17] The development of magnetic recording, initially onto wire and then ribbons of tape coated with a magnetic material (ferrite), meant that natural and artificial sounds could not only be recorded but edited and manipulated through cutting and splicing the tape in any desired order, and thus the composer or sound artist could assemble the recorded sounds into musical patterns. This approach probably arises out of the production of optical sound tracks for film.
In 1944 the Australian composer Percy Grainger began developing his “free music” machines, discovering that by modifying the use of piano rolls, making similar but more graphical paper rolls and attaching them to various electro-mechanical and electronic devices, he and his co-workers, Ella Grainger and Burnett Cross, could produce a music of continuous tones and tone clusters based on his interest in gliding sounds. [18]
In 1951 French National Radio established a studio for the production of concrete music which was used by many composers. At about the same time other composers in France and Germany were using the facilities of the radio studio maintenance workshop to make sounds for tape. Northwest German Radio in Köln (Cologne) set up an electronic music studio using oscillators and other test instruments specifically to make strictly electronic sounds.
Karlheinz Stockhausen is the best known of the composers who used the studio, but the emphasis on the pure electronic sound was soon hybridised with recorded “concrete” sounds in his Gesang der Jünglinge. [19] The original approach of the Cologne Radio studio was a form of additive synthesis and it is just a short step from assembling a collection of test oscillators on a bench to putting them in a box with filters and other modifiers and producing a sound synthesiser. [20] Once you have an electronic sound studio in a room, it only takes a little redesign and miniaturisation to put it into a box and make it cheap and portable so that musicians could buy a “synthesiser” and take it home or to the concert hall.
The hybrid use of electronics in music production was then greatly developed by the American composers John Cage [21] and David Tudor, and it is through their work, and the improvisational approach of the English composer Cornelius Cardew, as well as Stockhausen, Varèse, Schaeffer and others, that electronic music reached Australia.
One of the purposes of Computers and Electronics in the Arts was to bring together many of the people who were developing and working with new electronic instruments and new ways of making music through improvisation and the use of synthesisers and computers. The exhibition opening by Gough Whitlam (see above), included a concert with performances of new music works by Larry Sitsky, James Penberthy, Greg Schiemer, Roger Covell and other participants. There were also demonstrations of some of the computer and electronic imaging systems on show.
Larry Sitsky presented a work for four theremins called The Legions of Asmadeus which used the theremins attached to Stan Ostoja-Kotkowski’s interactive paintings and sculptures. Sitsky says of the work:

“I called it that because it sounded like all hell. … We got four people to “play” it, and basically they were really limited. All you could do was move your hand there and back [to control] the pitch. Since these weren’t skilled players no subtleties were possible. If you touched the thing it’d go wooop and just conk out. You could add echo and there were a couple of little gadgets you could play with. But there wasn’t much dynamics, pitch, and [it was as] rough as bags essentially. The score was made up of wiggly lines and timing points, so each [player] had a stop watch and did things accordingly.” [22]

Greg Schiemer has said of it:

“I remember both Philippa [Cullen] and myself were sort of a little bit sceptical of the piece for four theremins by Larry Sitsky. That was the opening of the thing. It created such a roaring… four theremins all going [at once].” [23]


Sadly, the opening concert was an object lesson in how badly demonstrations of the new and experimental can go wrong. Sitsky’s work for four theremins was supposed to have been tuned through headphones so that the music would start as a surprise, but someone left the amplifier turned up and the tuning was done aloud. James Penberthy gave a performance of his computer music written in real-time for a string quartet, which was very slow and abstract, and thus not particularly engaging. Schiemer and Connor’s equipment failed, which meant that their performance couldn’t even begin. [24]
For the week of the exhibition, performances and demonstrations of the new equipment under development were given on a daily basis. The musical systems included further playing of Penberthy’s computer composed works, a revamped version of Schiemer and Connor’s system, improvisational music by the Melbourne New Music Centre and Steve Dunstan with dance performances by Philippa Cullen and her company, and demonstrations of, and opportunities to try out, the Timbron system developed by Covell, Holmes and Dillon.

New Musical Instruments

At the beginning of the 1970s there were two types of synthesiser in Australia: either the Moog devices out of the US or the Electronic Music Studios’ VCS3 and Synthi A(KS) from the UK. Each consisted of a collection of oscillators having sine, triangle and square wave outputs of various frequency ranges, plus filters and amplifiers that could be patched together to make a considerable variety of sounds. The frequencies of the tones and the behaviour of the filters and the amplifiers could all be controlled by voltages produced by those oscillators, by other low-frequency oscillators, and by envelope generators that could be used to shape the attack, sustain and decay of any sound. But because most synthesisers started from these basic waveforms they had very limited timbral qualities, and new techniques of sound making were needed.
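As a rough illustration of what “voltage control” means in practice, the following Python sketch (my own simplification, not a model of the VCS3 or the Moog) generates a square-wave oscillator, passes it through a simple low-pass filter whose cutoff is swept by a slow control signal, and shapes its loudness with an attack/sustain/decay envelope.

    import numpy as np

    sr = 44_100
    t = np.arange(0, 1.0, 1 / sr)

    # Oscillator: a 110 Hz square wave.
    osc = np.sign(np.sin(2 * np.pi * 110 * t))

    # Filter: a one-pole low-pass whose cutoff falls over the note,
    # standing in for a voltage-controlled filter.
    cutoff = 200 + 1800 * (1 - t)                        # Hz
    alpha = 1 - np.exp(-2 * np.pi * cutoff / sr)
    filtered = np.zeros_like(osc)
    for i in range(1, len(osc)):
        filtered[i] = filtered[i - 1] + alpha[i] * (osc[i] - filtered[i - 1])

    # Amplifier: gain shaped by an attack / sustain / release envelope,
    # standing in for an envelope generator driving a voltage-controlled amplifier.
    attack, sustain_level, release = 0.05, 0.7, 0.4      # seconds, ratio, seconds
    env = np.interp(t, [0, attack, 1.0 - release, 1.0], [0, 1, sustain_level, 0])
    note = filtered * env                                # one second of sound, ready to play back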
There was another problem as well. Because the control of a synthesiser was mostly a matter of turning a knob, there was little palpable relation between the effort of the musician and the musical result. This tended to leave the musical performance somewhat emotionless and the audience’s visual experience somewhat vacant. Also, as far as public presentation was concerned, much of the electronic music of the time required the use of tape recording: most synthesisers were single voiced (monophonic), and to get a range of sounds working with each other in a more orchestral form the recordings had to be mixed, occasionally with live electronics but most often in the studio, and then only played back to the audience through speakers, which did not give an exciting feel to the work. The presence of the live performer was lost, along with the instrumental variations in timbre and timing that make live performance that much more intense than simple listening.
Thus there were two kinds of problem that electronic musicians sought to resolve: the performance problem and the timbre problem. These suggested two kinds of solution: new methods of controlling the synthesiser in performance, and new methods of synthesising sounds. The latter problem became the domain of research for a wide group of electronic music makers: composers collaborating with electronics specialists, e.g., Roger Covell and Harvey Holmes at UNSW or Greg Schiemer and Phil Connor at Sydney University, who worked on several forms of additive (or harmonic) synthesis, and Tony Furse, who worked on generating sounds with a computer. The former, performance, problem was attacked by researchers such as Covell and Holmes, with their proposed touch-sensitive instrument control devices, as well as by other performers working outside the directly musical arena, such as Philippa Cullen, with her interest in developing means by which dancers could directly control the musical production.
The two problems remain intrinsically related in performance; however, the development of new types of synthesis received the most attention. New forms of instrumental control were, I suspect, conceptually harder to develop, the piano-style keyboard being the most readily adaptable to the task of controlling the pitches produced with the synthesiser, though instruments like the guitar also offered potential solutions. Digital as well as analogue electronic techniques could be combined in the production and modification of sounds, and several hybrid machines were developed. But ultimately computerised synthesisers using graphical as well as numeric techniques for the construction of waveforms, as initially developed by Tony Furse, and rapidly followed by the use of sampling techniques as commercialised by Fairlight Instruments, became the primary new way of forming sounds.
Several of the new approaches to both sound and performance were demonstrated at the exhibition. The first we shall look at is the Timbron.

The Timbron – developed by Harvey Holmes, Harvey Dillon and Roger Covell

The Australian composer Roger Covell, electronic engineer Harvey Holmes and his doctoral student Harvey Dillon showed off their instrument, the Timbron, which attempted to solve both problems in a single package. As Covell noted:

“The object of the research, …, was to design an electronic musical instrument which could furnish players with direct tactile and audible responses in tandem and which would communicate a joint impression of appropriate movement (as measured in human terms) and consequent sound to listeners in the context of a public performance.” [25]

The Timbron was a combination of an additive sound synthesiser and an experimental touch sensitive pitch, volume and harmonic content control board. With the synthesiser a sound could be constructed from up to 10 sine-waves derived from a single high frequency oscillator so that the sine-waves were all harmonically related. The sine-waves could be de-tuned so that they were no longer harmonic to make dissonant sounds as well. The amplitude of each harmonic was shaped by an envelope generator and voltage controlled amplifier. Plug patching panels allowed the player to select which partials were used in the synthesis of the sound.
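A minimal sketch of the additive principle in Python (with assumed parameters and envelope shapes of my own, not the Timbron’s actual circuitry): up to ten sine partials at multiples of a fundamental, each with its own amplitude envelope, are summed into a single sound, and stretching the multiples away from whole-number ratios makes the result inharmonic.

    import numpy as np

    sr = 44_100
    t = np.arange(0, 1.0, 1 / sr)
    fundamental = 220.0                              # Hz

    def envelope(attack: float, decay: float) -> np.ndarray:
        """A simple attack/decay shape for one partial."""
        return np.interp(t, [0, attack, attack + decay], [0.0, 1.0, 0.0])

    stretch = 1.0                                    # 1.0 = harmonic; > 1.0 stretches the partials into inharmonicity
    sound = np.zeros_like(t)
    for n in range(1, 11):                           # partials 1..10
        freq = fundamental * n ** stretch
        amp = 1.0 / n                                # weaker upper partials
        sound += amp * envelope(0.01 * n, 0.8) * np.sin(2 * np.pi * freq * t)

    sound /= np.max(np.abs(sound))                   # normalise before playback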
Like all synthesisers it had its own particular sound:
“No matter how you made it inharmonic it had a certain sound, a very distinctive sound, totally recognisable and it didn’t seem to matter how you made it inharmonic you got much the same type of sound, just a very discordant sort of sound.” [26]
To play the instrument you could plug in a keyboard or the specially developed touch sensitive device, which was similar to the neck of a cello. It was about ¾ of a metre long and about 75mm wide. Harvey Dillon says of it:

“Where you pressed and how hard you pressed would generate three control voltages. And then it was totally flexible as to what you then used those control voltages for, but typically you’d use the pressing motion to, say, start a note and releasing it to stop a note. You might use the [sliding] up and down motion to govern pitch, which is all we ever did, and then the cross motion was really used much more flexibly. You could then program what that cross-voltage would do in terms of the mixture of the harmonics or the rates of onsets of the harmonics or things like that.” [27]

Harvey Dillon at the Timbron synthesiser.

Fig 6: Harvey Dillon at the Timbron as set up at C&EitA. Photo: Peter West.

Both aspects of the Timbron were built in the Electrical Engineering Department at the University of NSW with contributions from Roger Covell and some of his students from the UNSW Music Department. The initial project had been to develop a performance instrument, which would translate the actions of the musician playing it into control signals that could control some type of electronic sound generating device. Given that a sound generating device with sufficient timbral variation was necessary, it was decided to build an additive (or harmonic) synthesiser as well. The performance instrument was developed by Harvey Holmes in collaboration with Covell, while the synthesiser project was developed by Harvey Dillon as a PhD project and was intended primarily as a research tool. Holmes, his supervisor, wanted him “to do some work on what the human ear could hear in terms of just noticeable jumps in pitch or just noticeable jumps in intensity” and the Timbron provided “a way of flexibly making sounds that would let me design psycho-acoustic experiments [so] that I could find out what the human ear was capable of.” [28]
Dillon and the department’s technical officers built most of the circuits. However, it was also used as a test bed through which undergraduate and post-graduate electrical engineering students at UNSW could learn to build electronic circuits.
Harvey Holmes (or Roger Covell) was approached by Doug Richardson, and Dillon volunteered to demonstrate it at the exhibition, jumping at the chance to meet others who were working in similar areas of the then new media. At the opening event Covell demonstrated the touch sensitive sound board and subsequently during the week, Dillon gave a talk on “how sound is built up, and what are the constituents of sound.” He commented to me that:
“The Timbron was fantastic for demonstrating that because you could, literally by pushing banana plugs in, each banana plug corresponding to one harmonic, you could build up the harmonics one at a time or take them out. People could hear for themselves [that] there’s lots of tricks the ears play. If you have three harmonics playing and then you put a fourth harmonic in, you really notice it when you hear that fourth harmonic come in, you hear the sinewave right on top of the other sound and yet when you add a fifth one in the previous four just blend together and become a single sound and your attention gets focused on the new one. So it’s quite a clear demonstration of the way the brain is wired to react to a change and focus on the change and ignore everything else that’s not changing, and of course that’s how we make it through society, through our daily lives because there’s so much sensory information coming in all the time that our brain’s very well adapted to ignore. So the Timbron was good for demonstrating both how sound is built up and some aspects about how the brain goes about analysing complex sounds.” [29]
The other sound synthesis developments demonstrated at the exhibition were the devices developed by Greg Schiemer and Phil Connor to interface between the theremin, as a source of information about the position of a person in space, and a synthesiser.

Greg Schiemer and Phil Connor

Since 1972, Phil Connor and Greg Schiemer had been working with Philippa Cullen on a project using the theremin, which, since it can detect a person’s movement through space, would allow her to make music directly from her movements. The theremin makes sine-wave sounds whose frequency depends on how far the person is from the antenna. Cullen wasn’t happy with the plain theremin sound – warbling sine-waves – and wanted a wider range of control over the parameters of its sound. She asked Greg Schiemer, a friend who was studying music and fine arts with her at the University of Sydney, to help her get more interesting sounds from the theremin by hooking it up to an audio synthesiser.
Phil Connor, another friend of theirs and an electrical engineering student at the University, built four theremins which were initially used in the 1972 ballet Homage to Theremin II. These theremins used a single antenna which was either a long wire or a metal plate on a pedestal. He also designed two devices to process the theremin sound that would help in controlling the EMS VCS3 audio synthesisers that had recently become available in the Music Department. One was a “frequency-to-voltage converter” and the other was a “peak-detector”. The combination of the two devices made it possible for a dancer to “pluck a sound out of the air”. [30]
The frequency-to-voltage converter converts the frequency of the sound generated by the theremin to a voltage that is proportional to that frequency. This voltage can then be used with a voltage-controlled filter or a voltage-controlled oscillator in an audio synthesiser, giving a greater range of possibilities for the sound than the original theremin sound. [31] Since the frequency of the theremin sound changes according to the movement of a person in the field of its antenna, the voltage output of the frequency-to-voltage converter will change. If that movement is quick the frequency of the sound will change quickly and there will be a rapid change in the voltage from the frequency-to-voltage converter. The peak-detector detects this rapid change and converts it into a pulse [32] which then gates an audio signal or a control voltage into a further part of the synthesiser. Thus when the player or dancer made a plucking movement there would be “a peak in the frequency-to-voltage converter output, and this event would trigger an oscillator at a certain pitch.” [33] So the peak detector related the synthesiser sounds to gestural movements like reaching out to something or moving quickly in one direction and then another. The combination of the theremin and these two new devices would create “a virtual instrument around you” from which you could pluck sounds as though you were plucking the strings of a guitar.
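A behavioural sketch of these two devices in Python (my own interpretation of the description above, not Connor’s circuits) might look like the following: the frequency-to-voltage converter turns the theremin’s instantaneous frequency into a proportional control voltage, and the peak detector turns a sufficiently rapid rise in that voltage into a trigger pulse for the synthesiser.

    import numpy as np

    def frequency_to_voltage(freqs_hz: np.ndarray, volts_per_hz: float = 0.01) -> np.ndarray:
        """Control voltage proportional to the theremin's instantaneous frequency."""
        return freqs_hz * volts_per_hz

    def peak_detector(cv: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Emit a trigger pulse wherever the control voltage rises faster than the
        threshold (volts per sample), i.e. on a quick 'plucking' gesture."""
        rate_of_change = np.diff(cv, prepend=cv[0])
        return (rate_of_change > threshold).astype(float)

    # A dancer drifting slowly, then making one quick reach toward the antenna:
    theremin_freq = np.concatenate([
        np.linspace(300, 350, 200),        # slow drift: no trigger
        np.linspace(350, 900, 5),          # sudden movement: rapid frequency rise
        np.linspace(900, 400, 200),        # relaxing back
    ])
    cv = frequency_to_voltage(theremin_freq)
    gate = peak_detector(cv)
    print("trigger at samples:", np.nonzero(gate)[0])  # pulses only during the quick gesture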

Greg Schiemer on the pedestal antenna used with Cullen’s Theremin.

Schiemer refined the peak-detector and used it with a pedestal style antenna attached to the theremin. [above] This type of antenna was the result of an accidental rediscovery of a type of theremin called a Terpsitone, which Theremin himself had invented but which had been forgotten in favour of the hand-playable instrument popularised by Robert Moog, of Moog synthesiser fame. Theremin’s Terpsitone was a metal plate antenna attached to the theremin. It could be tuned with the dancer standing still on it so that when they moved it began to make sounds. Schiemer set it up so that a dancer standing on the plate on the pedestal could play the air around her. The system he built was not just responsive to hand movements but to full bodily movement.

A notional block diagram of the system that Schiemer and Connor established. The audio frequency output of the theremin is converted to a control voltage by the Frequency-to-Voltage converter and this control voltage is then converted, by the Peak-Detector, to a pulse coincident with the highest frequency made by the theremin. These signals and others from the synthesiser could be sent to the routing computer which would then send the selected set back to the synthesiser to control the sounds it was producing. The “Naked LSI” was the offending machine that failed as they began their opening concert performance. [Graphic: Stephen Jones]

The system was to be used in their presentation, A Rain Poem (1975), at Australia 75. Connor and Schiemer had been further developing the equipment they used with Cullen and her dancers and now had a mini-computer (a “Naked LSI”) which could instantly change the connections between the various devices used to process the signals from the theremin, for patching into different functions in the synthesiser. This should have meant that, as the performer went through each stage of the ballet, different sequences of sound events could be triggered by her movements. However, computers in the mid-1970s were terribly prone to failure, and this brought about a small disaster for them. When they began their performance at the opening event of the exhibition the computer crashed and thus no signals could be relayed from the theremin to the synthesiser. [34] It was, sadly, a terrible failure. Nevertheless the system was re-established later in the show, without the offending computer, and various performances with dancers were thus made possible. [35]

James Penberthy

James Penberthy was an Australian composer working in Perth who became interested in using computers in the composition of music in the very early 1970s. He applied to the Commonwealth Assistance to Australian Composers (CAAC) program for money “to computerise music for full orchestra”. He collaborated with Dan Chadwick, an English computer programmer working for ICL and newly arrived in Australia,

“and we wrote down four hundred years of music education – how long people can play instruments, when people’s lips get tired, how long each section should be played before the audience tired of it. Everything you can teach anybody, we taught this computer.” [36]

The computer was not programmed to synthesise the sounds directly or to control an analogue synthesiser to make the sounds, but to write a score for music to be played by an orchestra. With it Penberthy produced nine 20-minute pieces for orchestra in a short run on the ICL mainframe in North Sydney in 1973. One of the pieces, Beyond the Universe No.1, was recorded by the ABC in 1975.
Musicians playing to the score produced by a computer, probably the PDP-11 at the back, programmed by James Penberthy. [photo: Peter West]

Musicians playing the Penberthy piece by reading notations from computer screens. [I don’t know the names of the musicians]. [Photo: Peter West]

On the basis of this work he and Chadwick wrote a program that would compose music by generating the score in real-time, presenting it note-by-note on four ICL computer terminal screens for the musicians of a string quartet to read and then play in real-time. According to Greg Schiemer it “produced numbers on the screen, and the numbers were cues for different things to happen”. However the combination of the computer processing time and the reaction time of the musicians meant that it was incredibly slow and as a piece of music didn’t really work. Larry Sitsky has said of it: “That piece of Penberthy’s… Well it was a first and so it was interesting. But of course it never worked simply because the computer would flash a note and there was reaction time and so it all came out slow.” [37] Nevertheless it was the first time in Australia that music had been composed directly by computer.
There were several other musicians interested in aspects of new and experimental music who were also involved. These included members of the Melbourne New Music Centre, which had been established in 1973 as an offshoot of the Melbourne chapter of the International Society for Contemporary Music. It was essentially an experimental music group with Peter Mummé (electronics), Paul Prendergast (piano), Dave Brown (horns and reeds) and the poet Chris Mann. Several members of the group came to Canberra for the week and played a lot of improvisational music with Stephen Dunstan as well as for Philippa Cullen’s dance performances.

Stephen Dunstan

An experimental musician who was into jazz and electronic music. He brought with him an array of percussion instruments, a Synthi AKS and some small synthesisers he had built himself. His electronics were maverick; he didn’t seem to really understand how the things he was building worked, but they did, and they made most interesting sounds. His assembly technique was also unique: burning the component pins into sheets of Perspex and then soldering wires between the pins.

Tony Furse and the Qasar M8

Also working on developing new musical instruments using micro-electronics were Tony Furse and the Fairlight Instruments people, but neither of them could make it to the show. Tony Furse had begun building synthesisers with his Qasar analogue synthesiser in 1968. He was a sales engineer for Motorola semiconductors in the early 1970s and began to use their “6800” microprocessor to control the patching of the Qasar. Subsequently he developed this into a synthesiser that used stored waveforms drawn on a screen and controlled by a pair of 6800s, one of which arranged the patching, the fetching of waveforms and other structural processes, while the other played out the waveforms through a set of digital-to-analogue converters (DACs). He put 8 waveform stores and DACs into a package and called it the M8. Furse had intended to bring his M8 music computer (from which the Fairlight CMI was derived) but he ultimately didn’t because his work with Don Banks at the Canberra School of Music took priority. For detail photographs of Furse’s Qasar M8 instruments see the Museum of Applied Arts and Sciences website.

Fairlight Instruments

Peter Vogel and Kim Ryrie of Fairlight Instruments had proposed to bring their prototype Computer Music Instrument to the exhibition but as it was not properly ready they withdrew.

Philippa Cullen and dancers and the pressure sensitive floors

Philippa Cullen [38] was a dancer who had been developing and using interactive technologies to allow dancers to make their own music from their performance. Originally she had used theremins and photoelectric cells to make the sound, but she didn’t really like the quality of the sound. While studying at the University of Sydney she had met Greg Schiemer, a composer, and Phil Connor, an electrical engineering student, and she asked them to help her produce more interesting and musical sounds from the theremins and her dance. Connor and Schiemer developed several innovative electronic methods for using the theremins to control the analogue audio synthesisers they had access to, producing sounds that were directly influenced [controlled] by the dancer’s movements. This resulted in a performance called Homage to Theremin II at the University of Sydney in 1972, after which she went overseas to Europe.
In 1974 she returned with new ideas for interactive devices she could use and Schiemer introduced her to Arthur Spring, another electrical engineer in Sydney, who made her a set of pressure sensitive floors. These were a set of four triangular floors each supported only by its three outer edges resting on timber beams. Each supported floor had a slit in a piece of metal mounted underneath its centre which moved between a lamp and a light-dependent resistor (LDR) mounted independently of the slit. Thus the pressure on the floor which moved the slit up and down would allow different amounts of light to shine through it onto the LDR, which produced a changing voltage depending on the dancer’s weight or impact pressure when moving over the floor (see diagram). There were four floors which could be arranged in different ways, but were mostly used in a triangular arrangement.

Block diagram of how the Pressure-sensitive floors worked.

Philippa and her dancers were invited to perform at the Computers and Electronics in the Arts exhibition. She brought along her audio synthesiser (an EMS Synthi A) and the floors. The floors enabled the dancers to control the sounds they were dancing with. Philippa and her dancers set up the floors in the main space in the exhibition area, in front of a wall of colour TV monitors that were the main presentation system for the electronic graphics and video images made and presented during the show.
Unfortunately (or rather, fortunately, as the story develops) the connection between the floors and the synthesiser failed and Philippa realised that she and her dancers (Helen Herbertson, Brian Coughran and Wayne Nicols) were not going to be able to make music this way.
It seemed that there was no more interactive possibility, so Philippa decided that they should go home. However, this brought a number of the other participants who had technology in the exhibition together to solve the problem for her.
One of the computers in the exhibition was a PDP-11/40 mini-computer, which the local branch of Digital Equipment Australia had lent to the ANU Engineering Physics group for their part in the show. The computer came with an analogue-to-digital (A/D) converter to bring signals in from the analogue world, and the Engineering Physics group had installed their newly built computer-image frame-buffer in it as well. They had intended to use this equipment to demonstrate their satellite mapping [Landsat] projects. However, responding to the situation and to the common interests among the various groups showing their projects, they connected the output from the pressure-sensitive floors to the computer via the A/D converter and wrote a program by which the computer could read the changing voltages coming from the floors and use this data to draw a “history” of the dancers’ movements in the frame-buffer.
Stage and performance area of Computers and Electronics in the Arts. Philippa Cullen and company on the pressure sensitive floors. In the background on the stage is Stephen Dunstan with his instruments.

Shot of half the stage and performance area of Computers and Electronics in the Arts. Philippa Cullen and company on the pressure sensitive floors. In the background on the stage is Stephen Dunstan with some of his instruments. On the right is a wall of monitors on which the Bush Video videos and live camera images of the Cullen performances were shown.

The frame-buffer was synchronised to John Hansen’s video synthesiser so that it could be shown on the bank of colour TV monitors on the stage. Thus the RGB image from the framebuffer could be sent to the video synthesiser where it was combined with camera images of the dancers, other video synthesiser effects and video feedback and then displayed on the monitor bank. [39]

R-L: Philippa Cullen, Helen Herbertson and Wayne Nichol on the pressure sensitive floors.

Once the floors had been hooked up to the computer and through it to the video system, the dancers had to learn how to use this brand new medium with a visual output rather than the sound output that they had been used to. They moved to music played by the Melbourne New Music Centre group and Steve Dunstan on his percussion instruments and his hand-built audio synthesisers, which also supplied audio modulation for keying (matte) control of Hansen’s video synthesiser. Each of the four floors was given a different function in the computer’s construction of the images.

“One floor controlled horizontal position of a moving spot, a second controlled vertical position, a third altered the spot size and the fourth floor selected the spot colour.” [40]

As the traces of the dancers’ movements accumulated they were progressively erased [faded selectively] from the image, so that the picture was slowly replaced over an interval of about a minute and new movements could be seen without it becoming too messy. The dancers “spent many hours learning how to use this new medium before they could concentrate once again on interpreting the music they were dancing to rather than on producing the desired image. We found that simple and direct relationships between the movements on the floors and the behaviour of the moving spot were required. [41] Thus, leaning to the right on one floor moved the spot to the right while leaning forward on another floor caused the spot to move down.” [42]
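To make the mechanics concrete, here is a minimal sketch, in modern Python rather than anything that ran on the PDP-11, of how four floor voltages might be turned into a fading trace of the kind described above. All names, the 0–1 voltage scaling and the one-minute fade interval are assumptions for illustration only, not the original program.

import numpy as np

WIDTH = HEIGHT = 512
COLOURS = [1, 2, 3, 4, 5, 6, 7]   # hypothetical palette indices 1-7 (0 = black background)

framebuffer = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
age = np.zeros((HEIGHT, WIDTH), dtype=np.float32)

def update(floor_voltages, dt, fade_seconds=60.0):
    """floor_voltages: four 0..1 readings from the A/D converter; dt: seconds since last reading."""
    fx, fy, fsize, fcolour = floor_voltages
    x = int(fx * (WIDTH - 1))                             # floor 1: horizontal position
    y = int(fy * (HEIGHT - 1))                            # floor 2: vertical position
    radius = 1 + int(fsize * 15)                          # floor 3: spot size
    colour = COLOURS[int(fcolour * (len(COLOURS) - 1))]   # floor 4: spot colour

    # Age the whole picture and erase pixels older than about a minute.
    age[:] += dt
    framebuffer[age > fade_seconds] = 0

    # Stamp the new spot into the frame-buffer and reset its age.
    y0, y1 = max(0, y - radius), min(HEIGHT, y + radius)
    x0, x1 = max(0, x - radius), min(WIDTH, x + radius)
    framebuffer[y0:y1, x0:x1] = colour
    age[y0:y1, x0:x1] = 0.0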
Philippa Cullen watching the screens displaying the output of her dancers on the pressure sensitive floors.

Philippa Cullen watching the video from John Hansen’s video synthesiser linked to the ANU computer that was reading the output of the floors.

Performances on this integrated system drew large crowds and opportunities to play with the system were taken up at every possible instance. All this happened almost overnight and illustrates just how much interest there was in integrating all sorts of disparate systems as well as how willing the exhibiting artists and engineers were to do everything possible to make this integration work. The whole process formed an excellent example of the radically interesting results that collaborations between artists and scientists can bring about for themselves and for audience members, who, as likely as not, knew nothing of the drama that went on behind the scenes.
Cullen and her three accompanying dancers watching the video output from the pressure sensitive floors.

Philippa Cullen (l), Benny Zable? (sitting), Helen Herbertson, Wayne Nicols (r) in front
of the screens displaying floor history and colourised camera images of the dancers.
[Photograph: Peter West]

ANU Engineering Physics

The scientists and engineers in this group were investigating ways in which computers could be made more accessible to people. The approach they took in this study of Human-Computer Interaction (HCI) was to build devices that made asking the computer to do things, and getting its responses [answers], easier for the user [i.e., more user friendly]. They were particularly interested in the use of computers as teaching aids and, among other projects, used them to teach visually impaired students to write.
They developed several interesting technologies to assist people in interfacing with [talking to] the computer and showed two of them off at CaEitA. These were an output device, a frame-buffer, which allowed the computer to display colour pictures [images], and an input device, a digitising [drawing] tablet, which allowed people to draw pictures or write naturally into the computer.
In 1975 you communicated with computers via punched paper-tape and a keyboard known as a Teletype. One way to get pictures from a computer was by printing alphabetic characters in the right places on the page (now known as ASCII art); the other was to draw lines on a sheet of paper using a plotter, or onto a special kind of monitor that worked like an oscilloscope. Computer terminals consisting of a keyboard and a TV monitor that could only display alphanumeric characters were becoming available, but they could not display pictures. None of these techniques allowed the computer to display images made of areas filled with colour.

Close up of the traces of the dancer’s movements made by the ANU computer linked
to John Hansen’s video synthesiser. [Photograph: Peter West]

To make a coloured picture, e.g., a photograph or an abstract image, the computer has to place a “pixel” of data in a block of memory, the frame-buffer [image-buffer], whose locations represent every point of the display area of the screen. This is known as a bit-map and is what your PC does when it displays a VGA image [e.g., a digital photo] on your monitor.
The ANU Engineering Physics group had recently built a frame-buffer that could store [hold] a 512-pixel by 512-line image, with each pixel holding one bit for each of the three colours, Red, Green and Blue (RGB), plus an intensity bit giving two levels of brightness. The frame-buffer could then be read out to an RGB TV-style [raster] monitor, which displayed the colours that these pixel values produce (see table). [43]
RGBI    Colour
0000    BLACK
1000    RED
0100    GREEN
0010    BLUE
1100    YELLOW
1010    MAGENTA
0110    CYAN
1110    GREY
1001    LIGHT RED
0101    LIGHT GREEN
0011    LIGHT BLUE
1101    LIGHT YELLOW
1011    LIGHT MAGENTA
0111    LIGHT CYAN
1111    WHITE
There were no memory chips available in those days and all computers used tiny rings of magnetic material known as ferrite cores in which the magnetism could be made to point south or north to represent zeros and ones. [44] The memory used in this device came as two boards each with 32K x 18-bit words or 589,824 memory cores.
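As a rough check on these figures, here is a small Python sketch (mine, not from the article or its footnotes) that packs one RGBI pixel using the bit order implied by the table, and compares the storage a 512 x 512, 4-bit-per-pixel image needs with the capacity of the two core boards just described. The bit ordering within a pixel is an assumption.

pixels = 512 * 512
bits_per_pixel = 4                         # R, G, B and an intensity bit
image_bits = pixels * bits_per_pixel       # 1,048,576 bits for one full frame

core_boards = 2
bits_per_board = 32 * 1024 * 18            # 32K words of 18 bits = 589,824 cores
core_bits = core_boards * bits_per_board   # 1,179,648 cores in total

def rgbi(r, g, b, i):
    """Pack one pixel into a 4-bit code, ordered R, G, B, I as in the table (assumed)."""
    return (r << 3) | (g << 2) | (b << 1) | i

assert rgbi(1, 0, 0, 0) == 0b1000          # red
assert rgbi(1, 1, 1, 1) == 0b1111          # white
assert image_bits <= core_bits             # the image fits in the core store
print(image_bits, core_bits)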
The Engineering Physics group were using the frame-buffer in developing techniques for automated mapping based on Landsat [satellite] images. The picture from the satellite would be stored in the frame-buffer and the computer could then assist in its analysis to discover regions of different contrast that might represent water resources or different kinds of crops, etc. [45]
The second device was a digitising [drawing] tablet. It consisted of a pen connected to two strings forming the apex of a triangle. The strings were held in tension by elastic cords in a box at the rear of the tablet [so that they couldn’t become loose]. The pen could be pulled around the drawing area, and as each string was drawn out from, or pulled back into, its side of the box, the distance it had been drawn out was measured by a multi-turn potentiometer. The voltages from these potentiometers allowed the computer to calculate exactly where the pen was on the tablet. [46] The pen could be used to follow curves in photographically recorded data from scientific experiments, or to draw pictures with enough accuracy that the device was actually used to draw maps at the Department of National Mapping. The pen could also point to menu patches on the tablet to initiate routines in the software.
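The position calculation itself is simple triangulation. The sketch below is my reconstruction, not the original software (the variable names and the example numbers are invented): given the two string lengths and the known spacing between the points where the strings leave the box, the pen position follows from intersecting two circles.

import math

def pen_position(r1, r2, d):
    """r1, r2: string lengths from the left and right exits; d: spacing between the exits.
    Returns (x, y) with the origin at the left exit, x along the line between the exits,
    and y out towards the drawing area."""
    # Law of cosines / two-circle intersection.
    x = (d * d + r1 * r1 - r2 * r2) / (2.0 * d)
    y_sq = r1 * r1 - x * x
    if y_sq < 0:
        raise ValueError("string lengths inconsistent with exit spacing")
    return x, math.sqrt(y_sq)

# Example: exits 400 mm apart, strings drawn out 300 mm and 350 mm.
print(pen_position(300.0, 350.0, 400.0))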

Two images (by unknown artists) drawn with the string digitiser using McLeod and Ellyard’s SKETCH software.

The Canberra branch of the Digital Equipment Corporation had lent the Engineering Physics group a PDP-11 mini-computer for use in the exhibition. The computer included an analogue-to-digital converter and a GT40 display monitor with a light pen, on which the public could play some of the earliest computer games, including Lunar Lander. The Engineering Physics group attached the two devices discussed here to the mini-computer so that they could be used by the public visiting the exhibition. The frame-buffer and the digitising tablet were used with a drawing program called “SKETCH”, which allowed users to draw their own pictures onto a 12” colour TV monitor, and the frame-buffer was also used in conjunction with the PDP-11’s analogue-to-digital converter to take voltages from Philippa Cullen’s pressure-sensitive floors and make pictures that represented the history of the dancers’ movements across the floors.

John Hansen and the video synthesiser

Hansen was born in Denmark and arrived in Australia at the age of six when his parents emigrated here. He grew up in the Victorian country town of Hayfield, where his father ran the radio repair shop, so he grew up surrounded by electronics and it was only natural that he ended up studying communications engineering at RMIT in Melbourne. His first job, taken while he was still studying, was in the PMG research laboratories [telephones were under the control of the Postmaster-General’s Department before privatisation]; he graduated in 1968 and took a job in the Zoology Department at Monash University in 1969.
While at RMIT he was introduced to electronic music through a talk by Keith Humble, one of Australia’s first electronic music composers. He met Steven Dunstan at this event and started building noise generators and other electronic sound-making devices, including a theremin. This was the Psychedelic era.

John Hansen at his Video Synthesiser

He drifted away from electronic sound into visuals and shortly after finishing his studies he began doing liquid light shows at the Melbourne University Union rock and roll nights with Hugh McSpedden of the Edison Light Show Company. Hansen built motorised coloured oil/water devices that were driven by stepper motors and then built several kinetic sculptures including one for the Captain Cook Bicentenary celebrations in 1970.

Video synthesiser image from John Hansen’s “Pong” computer game circuit based video synthesiser.

While working in the Zoology Department at Monash University, making telemetry equipment to track animals in their wild habitat, Hansen came across the light-emitting diode (LED), a new device which had just come onto the electronics market. He added these to the telemetry transmitters he was using so that the scientists could track animals at night. He realised that these little flashing lights, which used almost no battery power, would make great jewellery and began to make pieces for his friends. He had an exhibition at Realities Gallery in Melbourne which was quite successful, and he nearly began a career in making electronic jewellery.
The Zoologists were also using video equipment to record animal behaviour, which gave him his first access to video recording equipment. He came across the book Guerrilla Television, which galvanised many an artist interested in the media at that stage, and got involved with the Melbourne Access Video and Media Co-op. His active interest in video got the better of him, and having been interested in building electronic sound devices he began to bend them towards electronic visual synthesis. One of his first image-generating machines was based on a circuit for turning a TV into an oscilloscope. He then added a rotating colour wheel in front of the screen, synchronised to the audio signal, to make colour Lissajous-figure images.
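For readers unfamiliar with Lissajous figures, the sketch below shows the principle (illustrative only: the frequencies and phase are arbitrary choices, and matplotlib stands in for the oscilloscope screen). One audio signal drives the horizontal deflection and another drives the vertical; their frequency ratio and relative phase determine the looping pattern.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 0.05, 5000)                # 50 ms of signal
x = np.sin(2 * np.pi * 220 * t)                 # one audio tone -> horizontal deflection
y = np.sin(2 * np.pi * 330 * t + np.pi / 4)     # a second tone (3:2 ratio) -> vertical deflection

plt.plot(x, y, linewidth=0.7)
plt.axis("equal")
plt.title("Lissajous figure from a 2:3 frequency ratio")
plt.show()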
In 1974 Hansen received a grant from the Film and Television Board of the Australian Council for the Arts to build a video synthesiser. He

“bought a colour television set, a Philips [video-]cassette recorder and a Grass Valley video mixer [with mixes and wipes, a chroma-keyer and colour background generator]. With the change left over I built a console and developed a lot of electronic circuitry, primarily audio-synchronised.” [48]

Video synthesiser treated version of the Australia 75 poster originally designed by Stan Ostoja Kotkowski.

Among the circuitry was a pattern generator that produced grids of lines and dots and a device based on
“circuitry that was just coming out for playing games on television. [This was] a ping-pong circuit, which was simply a way of making square pixels slide across the screen and when it hit something, depending on its angle and trajectory and speed it would deflect in a proper fashion in different direction, and using that principle I built up about eight of these and used them to deflect objects or key sources across the screen.” [49]
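The deflection logic Hansen describes is essentially the bouncing-block arithmetic of the early Pong-style game chips. A toy sketch, assuming a simple rectangular screen and invented parameters (this is not his circuitry), looks like this:

WIDTH, HEIGHT = 512, 400

x, y = 100, 50          # block position in pixels
vx, vy = 3, 2           # pixels per frame; sets the angle, trajectory and speed

def step():
    """Advance one video frame, deflecting off the screen edges."""
    global x, y, vx, vy
    x += vx
    y += vy
    if x <= 0 or x >= WIDTH:    # hit a vertical edge: reverse horizontal motion
        vx = -vx
    if y <= 0 or y >= HEIGHT:   # hit a horizontal edge: reverse vertical motion
        vy = -vy

for _ in range(1000):
    step()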
The video synthesiser was also
“heavy on colourisation as well as audio, and [there was a] black and white camera which could be colourised. The console was full of peg boards [with] real time, live switching to get different sequences.” [50]
As he was completing the synthesiser he was invited to show it at Computers and Electronics in the Arts.
“I remember we first got there and we were all setting up and we all had our individual spaces, and I was next to [the ANU Engineering Physics group]. Their PDP-11 and all the monitors were being set up, and Philippa’s floors and I don’t think there was much real dedicated “you will do this” or “this will happen there”. Everything was very fluid, it just seemed to amalgamate together as a continuous process through the whole show and by the end of it we had some real connectivity between all of us. Particularly between Philippa, Chris [Ellyard] and myself. There were some very good cross connections with Philippa’s floors modulating some of my patterns.” [51]
And he also took the output from Engineering Physics’ computer video display of the signals from Philippa’s floors. Hansen used his synthesiser to mix together synthesised image modules, colourise the black and white camera images and feed them to the bank of colour monitors on the stage.
In many ways the really important thing about the Computers and Electronics in the Arts show was the development that grew out of the interactions between participants. As Hansen noted, the “Coupling [between systems] that we did at Australia ’75 occurred fairly easily with the types of designs that we had in those days.” [52]

Bush Video

In 1975 video art was an experimental form. There was no established canon and Nam June Paik’s work was commonly thought of as the most interesting and adventurous. There was some conceptual work made with video, e.g., the work of Peter Campus, and some performance work, e.g., by Vito Acconci, Bruce Nauman or Carolee Schneemann. The exploration at the time was very much about finding the boundaries of what video was. A great deal of video use was in community activist situations where a very discreet group, often only one person, could shoot events and interviews almost without being noticed. You could do things with video that you couldn’t do with film because it was a real-time stream of representation.
You couldn’t edit with video but you didn’t have to wait for the processing of the film. You could shoot video without having to go through all the rigmarole of lighting and the clapper board and all the other set up that had become necessary with film. Video lacked colour in its very early forms but the sound was automatically in sync. You could walk in and shoot video, edit in-camera and then walk out again or play it back on the spot or put it to air that day (although the television news could do that if they had the lab next door). Video was more like electronic music and was easily produced with abstract electronic effects mixed with live dancers or performance or appropriated visual material from NASA films and other things.
Two versions of the Boomerang image made by Doug Richardson, the second with feedback added by Bush Video.

The equipment Bush Video used to make their video began with the Portapak, shooting day-to-day events at the Nimbin festival (1973), but then ranged from video feedback, to the use of an early computer graphics system built by Doug Richardson, to images made with oscilloscopes and, towards the group’s closure, to images produced with modified monitors. To make video feedback you point a camera at a monitor and feed the camera’s signal back into that monitor so that it is looking at itself. This produces a loop of video signal with a delay in it, caused by the time it takes the monitor to display the image (not very long really).
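A crude way to get a feel for why such a loop produces evolving patterns is to simulate it digitally. The sketch below is only an analogy with assumed parameters (rotation, magnification and loop gain); real analogue feedback also involves camera lag, blooming and noise that this ignores.

import numpy as np
from scipy.ndimage import rotate, zoom

def feedback_step(frame, gain=1.0, angle=3.0, scale=1.05):
    """One trip around the camera -> monitor loop (hypothetical parameters)."""
    # The camera sees the monitor slightly rotated and magnified.
    seen = rotate(frame, angle, reshape=False, order=1)
    seen = zoom(seen, scale, order=1)
    # Crop back to the monitor's resolution (centre cut).
    h, w = frame.shape
    top = (seen.shape[0] - h) // 2
    left = (seen.shape[1] - w) // 2
    seen = seen[top:top + h, left:left + w]
    # A loop gain near or above one is what lets the pattern sustain itself.
    return np.clip(gain * seen, 0.0, 1.0)

frame = np.zeros((256, 256))
frame[120:136, 120:136] = 1.0      # a small bright "seed" in front of the lens
for _ in range(40):                # forty trips around the loop
    frame = feedback_step(frame)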

Bush Video became the main group of people involved with video as an experimental art form in the early 1970s in Australia. It was set up by the experimental film-maker, Mick Glasheen, who had already been using video since 1968. He was approached by the organisers of the Nimbin Aquarius Festival – to be held in the May university holidays of 1973 – to document it and provide video access to festival participants. Glasheen and Joseph el Khourey, another filmmaker he had recently met, joined forces with the Australian Union of Students who were producing the Nimbin festival, and applied for funding to build a cable network through the town of Nimbin and to set up a video centre in the town. The idea was that participants of the festival could record events and these video recordings could be distributed via cable to the many gathering places throughout the town and the Festival grounds for other participants to watch at later times. This was the very first experiment in cable television in Australia.

Bush Video gathered a large group of other like-minded artists, filmmakers and technologists, both before the Festival, to help set it up, and afterwards. When they arrived back in Sydney they moved into the studio space in Ultimo that Glasheen had established beforehand, and this became the studio and gathering place for all the members of Bush Video. More people got involved, including a young electronics enthusiast known as Ariel. It was a loose collective organisation built out of a spirit of collaboration, and it explored most of the areas in which video has been used since.

For many of the members of Bush Video the main interest was in the use of technology in the making of art. Glasheen had already been working with colour video and had recognised its electronic potential in the glowing flows of video feedback. He had also been recording to film small pieces of computer animation made with the assistance of Doug Richardson, who had built a computer graphics facility based on a DEC PDP-8 minicomputer at the University of Sydney. Once Bush Video returned from Nimbin the electronic project took off. A wall of TV monitors and a video mixing capability were established in the studio, with cameras for recording everything from dance and music performances to video feedback and Lissajous figures on oscilloscopes. Glasheen describes the attraction that this electronic video art had for him:
“I was drawn to the organic nature of it, … it seemed to me that video and electronic art is really an image of … energy! It’s live light energy! Electromagnetic fields that are made visible! … You know, there’s this glowing cathode tube with an image there that was alive. So I just felt that there’s life there, this new life-form, that could be felt when you’re doing video feedback.” [53]

Frames from MetaVideo Programming by Bush Video, 1973. Note the combination of the line
inscribed across the screen, driven by audio waveforms and computer controlled waveforms, and the
video feedback giving an illusion of depth. [Courtesy: Bush Video]

The studio process often involved nights of live mixdowns with as many videotapes and electronic image generation devices as possible brought into play over a period of recording. [54] These sessions could be quite wild and lots of interesting images were produced, though few coherent finished works were made. However, each session produced material that could go into the works that individual members were making for themselves. One of the more finished works, MetaVideo Programming, was commissioned by the National Gallery of Australia for its collection of experimental art. This and several other pieces were shown by Bush Video at the Computers and Electronics in the Arts exhibition.
Like all the other participants in the exhibition Bush Video were invited to contribute by Doug Richardson. They had a van in which they could carry their equipment and a Geodesic Dome that Glasheen had built that they lived in when travelling. The Dome was set up in Commonwealth Park by the lake in Canberra as accommodation for the core members who came to Canberra during Australia 75.
Bush Video also brought their collection of monitors and added them to the six colour monitors that had been lent for the exhibition. These formed a wall of monitors on which all the video work, tape playbacks of Bush Video pieces and live performances, mixed either by Bush Video or through John Hansen’s video synthesiser, were shown to the public. The exhibition ran just as colour television was launched in Australia and for many members of the public it was their first experience of it. Bush Video had made hours of abstract video which Ariel and el Khourey played back on the wall of monitors built on the stage in the exhibition, keeping up an almost continuous stream of wildly abstract video for the audience and other participants alike.

This was a period of high experimentation – the artists working with electronics did not know what could not be done, and they had access to the smallest details of the technology: every single op-amp if working in the analogue domain, every AND gate if in the digital, or one’s own assembler code if working directly with a microprocessor.
Video too was highly experimental. The technology available to artists was still new, very primitive and unstable. The kinds of images being generated were often a result of weirdness in the equipment and could often be very difficult to get a good recording of.
For the Bush Video artists, the use of video feedback represented the way in which a technology might almost become a living thing. This could be seen in the way the video feedback became self-sustaining and often quite organised in spiralling forms that inevitably led to comparisons with the drawings of living forms that D’Arcy Thompson had illustrated in his book On Growth and Form. The slightest change of lighting or camera setting, or of the images it was mixed with, could trigger the feedback off into new forms.
The mixer, and what it could do using wipes and luminance keying, meant that layers of images (oscilloscope displays, Lissajous figures, animated wire-frame geometric drawings done on Doug Richardson’s computer and the streaming echoes of video feedback) could be composited together into collections of images redolent with ideas about the geometry of space and consciousness. These could then be incorporated into video that might be thought of as semi-documentary but now had whole new areas of meaning asserted in it. The overall search was for a new language for the new ideas that came with cybernetics, geodesic domes, Buckminster Fuller and Marshall McLuhan and, of course, the newly accessible electronic technologies.
Here video is not a narrative thing but an attempt to find a new language of images as coherent living ideas – images formed minds and directed ways of thinking and this led to ways of thinking about consciousness and memory and new approaches to metaphysics [this was after all the 70s when everybody was starting to think a little differently after the encouragement from the hippies in the 60s]. It was felt that somehow the images being produced were a direct emulation of the flow of consciousness and the deeper understandings that may have been perceived through meditation and other spiritual activities [and, of course, LSD]. This was a period of what we might think of as “Spiritual Technology”.

Stan Ostoja-Kotkowski
The Laser Chromason and the Theremin

Stan Ostoja-Kotkowski had been a Creative Arts Fellow at the Australian National University (ANU) in 1971-72 and was invited to show some of his electronic and laser artworks in the exhibition (CaEitA). He showed two of his theremin pictures and the two Laser Chromasons that had been built at the ANU.
He was a Polish immigrant who studied painting in Melbourne after he arrived in 1950. He then lived in Central Australia for many years and became deeply interested in the light of the desert. At one stage he asked a remarkable question:
“Why couldn’t a painting change its shape, form and color? … It seemed to me that you could achieve this by using light as a tool and that the closest thing to the source of light we know and can handle confidently is electronics.” [55]

So he became interested in light as a medium and this led him to make electronic drawings using TV tubes and otherwise explore the possibilities of electronics in art. In 1967 he discovered the intensity of laser light and the purity of its colours, and he began to use lasers in set designs for plays and operas that he worked on in Adelaide. He also took lots of photographs, which he loved to manipulate and use in audio-visual projection shows with the lasers. He discovered that by shining the laser through broken glass he could bend and disperse it across the screen, making beautiful traces of pure colour.

In 1969 a circuit for a theremin was published in Electronics Australia magazine. The theremin had been invented by a Russian radio engineer in 1920 and could be played as a musical instrument. It is made from a pair of matched radio-frequency oscillators whose outputs are mixed together. One oscillator has a fixed frequency; the other has a piece of wire acting as an antenna. As a person comes near this antenna the frequency of the second oscillator varies, so that the two oscillators now have different frequencies and produce an audible beat frequency that changes as the person moves around it.
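The numbers work out something like the sketch below (the component values are my assumptions, not those of the Electronics Australia circuit): even a fraction of a picofarad of hand capacitance added to the antenna oscillator shifts its frequency by an audible amount relative to the fixed oscillator.

import math

L = 1e-3            # 1 mH tank inductance (assumed)
C = 100e-12         # 100 pF tank capacitance (assumed)

def osc_freq(extra_capacitance=0.0):
    """Resonant frequency of an LC oscillator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * (C + extra_capacitance)))

f_fixed = osc_freq()                      # the fixed reference oscillator
for hand_pf in (0.0, 0.01, 0.05, 0.1):    # hand capacitance near the antenna, in picofarads
    f_var = osc_freq(hand_pf * 1e-12)
    print(f"{hand_pf:4.2f} pF -> beat of {abs(f_fixed - f_var):7.1f} Hz")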
Because a theremin responds with sound to the presence of a person near it, it inspired a number of artists to explore the possibilities it offered [others who used theremins were Philippa Cullen and Greg Schiemer]. Stan was making paintings in a variety of materials, including Op-Art designs sandblasted into mirror-finish stainless-steel plates and abstracts in enamel paints baked onto steel. Being interested in using electronics to make his paintings more active, he realised that he could use the theremin to make his paintings produce sounds as people came up to them. An engineer in Adelaide named Phil Storr designed a theremin for him to use with his paintings. Because the frames and picture areas of some of his paintings were steel, he could run a wire from the theremin into the picture frame and the whole painting would then act as its antenna. As a person came up to the painting it would respond with sounds made by the theremin. As one reviewer said of these paintings in a 1977 exhibition:

“Approach [the painting] and it growls and grunts. The closer you get, the more excited it becomes. If you touch its surface it lets out a high-pitched scream.” [56]

Stan Ostoja-Kotkowski sitting at the controls of the Laser Chromasons with a theremin
being played by (unknown) to the right.

Among the many projects he engaged in during his time as a Creative Arts Fellow at the ANU, he designed a pair of devices he called Laser Chromasons which Malcolm Gamlen and Terry McGee, who were on the engineering workshop staff of the ANU Research School of Physical Sciences, then built for him.
The Laser Chromason is a 60 cm Perspex sphere standing on a plinth that contains the electronics. Six coloured lamps display a field of colour, pierced by the sharp intensity of the shutter-modulated, reflected and dispersed red laser light shifting and forming on a translucent screen. [57] The screen and a series of rings around it were sandblasted into the interior of the sphere, giving a translucent surface.
Two small red lasers and six coloured lamps were housed inside the sphere. One of the lasers was reflected onto the screen by a rotating disk made from 6 segments of distorted glass, like the glass used in a bathroom shower wall and chosen for its different irregularities. A stepper motor rotated the disk and the laser beam was cut on and off by a mechanical shutter controlled by the sound. The other laser was reflected off a small mirror with three legs that were glued to the voice coils of three tiny speakers so that the laser beam vibrated about the screen in time with the music.
The electronics were housed in the plinth. Inputs from a microphone (for ambient sound), or from a synthesiser or tape-recorder, could be applied to a bank of electronic filters which divided the sound up into bands of audio, much as the tone controls [an equaliser] do in your hi-fi. The electrical signals coming from these filters were then patched to the lights, the laser mirrors and the shutter to control the light sources in different ways according to the sound. [58]
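A digital sketch of the same idea (mine, not the Chromason’s analogue circuitry; the band edges and sample rate are arbitrary) splits an audio signal into a few bands and uses the energy in each band as a lamp or mirror drive level.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100                                       # sample rate, Hz
BANDS = [(60, 250), (250, 1000), (1000, 4000)]   # low / mid / high splits (assumed)

def lamp_levels(audio):
    """Return one 0..1 level per band for a block of audio samples."""
    levels = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        levels.append(float(np.sqrt(np.mean(band ** 2))))   # RMS energy in the band
    peak = max(levels) or 1.0
    return [lvl / peak for lvl in levels]                   # normalise to the loudest band

# Example: a 440 Hz tone lights mainly the "mid" lamp.
t = np.arange(0, 0.1, 1 / FS)
print(lamp_levels(np.sin(2 * np.pi * 440 * t)))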

Footnotes

1: Weibel, Peter, et al., Rosen, Margit, ed. 2011. A Little-Known Story about a Movement, a Magazine, and the Computer’s Arrival in Art: New Tendencies and Bit International, 1961-1973. Karlsruhe, Germany; Cambridge, Mass., USA: ZKM; MIT Press: https://mitpress.mit.edu/9780262515818/a-little-known-story-about-a-movement-a-magazine-and-the-computers-arrival-in-art/

2: Jones, Stephen, “Frank Eidlitz: Design and the origins of computer graphics in Australia”, Australian and New Zealand Journal of Art (ANZJA), Vol. 10, Issue 1, 2009.

3: About whom I, unfortunately, know very little.

4. http://scanlines.net/person/michael-glasheen or http://www.mediavr.com/drawingontheland.htm

5: There isn’t much on Tom Barber on the web but some of these geodesic objects designed by him as fly-thrus are pretty good: https://www.youtube.com/channel/UCoccJkxrbZuyrlaaecUF5GA

6: Some material on Joseph el Khouri is at https://scanlines.net/object/inside-memory-theatre/

7: I don’t presently have a web site address for Brown. Her Roktowa site < http://www.roktowa.org > is not working properly and is very hard to read.

8: He is a renowned photographer now: https://www.smh.com.au/national/photographer-captured-divine-details-of-faces-and-places-20201227-p56qam.html also http://www.jonnylewis.org

9: https://vimeo.com/62882103 Interview with Jon Lewis, Fat Jack Jacobsen and Anne Kelly about Bush Video, on the ABC’s GTK, 17th September 1973. Sadly the original 4:3 format has been cropped top and bottom to 16:9, leaving some of the original shots lacking background detail. See also https://vimeo.com/65006786

10: https://metaform.tv

11: http://www.auzgnosis.com/art/artist_cv.htm

12: Doug Richardson, Programme for “Computers and Electronics in the Arts”, Australia 75, Canberra. March 7-16, 1975.
