Open Data: are local councils getting the message?

Liam Maxwell GaaP Seminar 5 February 2015/Tim O’Riordan ©2015/cc-by-sa 3.0

I attended a highly inspirational talk at the Ordnance Survey last Thursday. The key speaker, Chief Technology Officer at the UK Government’s Cabinet Office, Liam Maxwell, spoke on “Government as a Platform” (GaaP) under the auspices of the Southern Policy Centre to a distinguished group including local and national politicians, academics, CEOs and researchers. Maxwell is in charge of streamlining the online provision of government services and has overseen the move from the old direct.gov.uk service to gov.uk – promoting their key message that they are providing “[d]igital services so good people prefer to use them”. How successfully this is happening can be observed by exploring gov.uk’s performance data.

So what is GaaP and should we mind it?

The driving force behind GaaP is the Web and how it enables governments, local and national, to gain a better understanding of our needs, and enables us to oversee, interrogate, and participate in our government in new and potentially more effective ways. In addition to “building digital services that are simpler, clearer and faster to use”, at the heart of GaaP is shared information. Although managed by a different Cabinet Office team, open data plays a significant part in lifting the lid on the workings of government. Data that were once squirreled away in Whitehall filing cabinets and town hall basements are now being made available on the Web in an unprecedented move towards greater transparency and openness in government.

In this new arrangement, government, as a source of data, becomes the ‘guide on the side’ – an enabler rather than the leader of civic participation – and as active, Web-connected citizens we now have the tools to find solutions to problems that affect us. As public.resource.org assert in their ‘8 government open data principles’: “[o]pen data promotes increased civil discourse, improved public welfare, and a more efficient use of public resources.” At a time of increasing constraints on public spending, the benefits of open data, open standards and open source tools (like the Government Digital Service open source platform) have the potential to effect positive change in how we use government services.

There are some substantial barriers to overcome. Real concerns exist about the effective and secure management of data, as the debate on the government’s care.data project has shown. Can we be sure that those publishing data do so without inadvertently releasing our personal information? This requires a very clear understanding of the dangers of re-identifying anonymised public data, and effective controls on how data are released for publication.

In addition, there is a lack of public awareness of open data, and of the skills and knowledge needed to use it effectively. This will come, with the bedding-in of the new Computer Science curriculum and through interventions like those run by the Ordnance Survey, but there is still a great deal to do before we start to see tangible benefits in the delivery of government services.

Close to home, local councils are starting to adopt more transparent practices, but progress is slow. My local authority, Southampton City Council, has released some financial data – some of which could be considered ‘3 star’ – and anyone with the time and motivation to find their way around MS Excel (with the NodeXL template) or Tableau will find something of interest. Cambridge City Council have published a considerable amount of data (some of it 4 star), and across the country there’s a patchy, but growing, amount of local government data available for all of us to interrogate.
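
By way of a hedged illustration, here is how a ‘3 star’ release (machine-readable, in a non-proprietary format such as CSV) might be explored with Python and pandas rather than Excel. The file name and column headings below are invented, since published schemas vary from council to council.

```python
# A minimal sketch of summarising a hypothetical 'spending over £500' CSV.
# The file name and column names are invented; real releases differ by council.
import pandas as pd

df = pd.read_csv("southampton_spending_over_500.csv",
                 parse_dates=["Payment Date"])

# Total spend per supplier, largest first.
by_supplier = (df.groupby("Supplier Name")["Amount"]
                 .sum()
                 .sort_values(ascending=False))

print(by_supplier.head(10))
```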

This is no small undertaking: council budgets are being squeezed at an unprecedented level, and doing something new with uncertain outcomes is a difficult sell at the best of times. Creating exemplars of good practice is important, and the body local government looks to for advice and direction on website development, the Society of Information Technology Management (SOCITM), has created an ‘Innovation Platform’ that promotes the open government data agenda and helps local councils translate policy into action.

The gap between our current local government services, and how they could be better designed and managed in future, is important to us all. There are already inspiring developments – as well as the SOCITM initiative, the Local Government Association’s open data repository, the work of the Open Data Institute, and the Government Digital Service are supporting the move to more open government. The key message is that open data, open standards and open tools provide us with opportunities to develop modern, responsive public services, and to participate in improving our local economies.


It was thirty years ago today…

anti-cuts march, Bournemouth, 21 Nov 1984

Anti-cuts March/BPCAD ©1984

By way of contributing a little something to the public record, I’ve published an edited version of a video I made with the help of fellow students in 1984, while I was in the final year of a film production course at Bournemouth and Poole College of Art and Design (BPCAD – now the Arts University Bournemouth). At the time most students were entitled to grants to support their education, and when the government suddenly announced a cut in this financial support, the National Union of Students set about galvanising students into a radical response. If I recall correctly, one day in early November I got into college at my usual time, heard there was going to be a meeting to discuss what ‘action’ to take, decided that this was a story worth following, got permission from the tutors to take out cameras, lighting etc, and started recording what followed.

It turned out to be an interesting ride. During the following weeks there were a lot of meetings, a march on Bournemouth town centre, a 30,000 strong rally at Queen Elizabeth Hall in London followed by a flaming torch-lit march on parliament and Downing Street by irate, chanting students. The press reported that “180 students were arrested after part of central London had been brought to a halt during the evening rush hour. Three bridges, Westminster, Waterloo and Lambeth, were closed to traffic” (The Guardian, 29 November 1984).

The upshot was that, amazingly, we (the students) won. To quote The Guardian again: “What Sir Keith, with rare brilliance has managed to do is to construct a broad coalition of profound hostility” (28 November 1984). Under pressure from Tory backbenchers, the government backed down. A parliamentary briefing paper published in 1997 also puts it very well: “Th[e] announcement gave rise to a storm of protest, focussed mainly on the imposition of tuition fees, which mobilised students, parents and backbenchers. On 5 December 1984 Sir Keith Joseph responded by announcing that the proposed contribution to tuition fees would be withdrawn”.

You may notice that this video isn’t particularly high quality. This is because it was shot on Umatic video tape and 16mm film, with sync and non-sync sound, and originally edited on a Panasonic Umatic tape editing system. It was then copied onto VHS tape and from there onto DVD, and finally edited and encoded using Lightworks software. So, there’s been some image degradation over time.

The video features:
Paul Needham, President of the National Union of Students at BPCAD
Vicky Matthews
Suri Krishnamma
Cathy Wilson, Parliamentary Candidate for the Labour Party, Isle of Wight
Vicky Phillips, President (Welfare), National Union of Students
An unidentified representative from the National Union of Mineworkers
An unidentified union leader (possibly David Lea, Assistant General Secretary of the Trades Union Congress)
Rodney Bickerstaffe, General Secretary of the National Union of Public Employees

The crew:
Editor’s assistant: Richard McLaughlin
VTR Operators: Ian Campbell and Sue Kennett
Sound Assistants: Ian Salvage and Liam Lyons
Camera Assistants: Cameron Whittle, Paul Metherall and Keith Mack
Lighting: Suri Krishnamma and Ian Kelso
Sound: Ian Campbell and Ian Salvage
Camera Operators: Ian Kelso, Robert Williams, Andrew Hewstone and John Bennett
Director and Editor: Tim O’Riordan

I’ve made an attempt to contact those who appear in the video, but as I’ve lost touch with pretty much everyone who took part, it has proven impossible to find out if anyone has any issues with sharing this. So, if anyone in the video is concerned about what they see here, please let me know.

What else was happening on 28 November 1984:

Radio Times listing for BBC1 (BBC Genome project)
November, 1984 in the UK (Wikipedia)


The smallest, biggest film festival in the world!


Couch Fest Bitterne Park 2012

Relax on our sofas and watch some of the best new short films from around the world. On Saturday, 6 December 2014 the 6th annual Couch Fest Film Festival will be held in residential homes and alternative venues around the globe – from Hong Kong to Kathmandu, from Berlin to Brasilia – and Bitterne Park Baptist Church Hall in Southampton.

With a little help from my family, I’ll be hosting this unique event locally – it will not happen online and will not be televised. Couch Fest is a film festival that replaces traditional cinema halls with cozy residential venues and aims to bring movie lovers together in a comfortable, relaxed setting.

Founded in 2008 by Seattle filmmaker, Craig Downing, this unique worldwide festival has built a passionate following thanks to its reliably high quality, entertaining film programs, and its open-minded “do it yourself” ethic. Says Downing, “I’m excited to provide others the chance to watch grand short films whilst sitting on their rump in living rooms all over town! What better way to get out and meet your neighbours?” No wonder Wired describe it as “the world’s most cozy film festival”.

The screening at Bitterne Park presents a unique 90 minute family-friendly selection including Oscar nominated short Do I have to take care of everything?, Pink Helmet Posse, and more than 15 other brilliant international short films – many of which are still exclusively playing at some of the top film festivals in North America and Europe. So why not join Couch Fest Bitterne Park on Facebook or Eventbrite?

Bitterne Park Baptist Church Hall is a few steps away from the Wellington Road stop on the no. 7 bus route linking Southampton city centre to Townhill Park. The venue is accessible to disabled guests. Please note that street parking is limited.

Entry is free and doors open at 7pm, with the programme starting at 7.30pm. Tea, coffee, soft drinks and cake will be available.

Venue: Bitterne Park Baptist Church Hall, Wellington Road, Southampton, SO18 1PH (Location Map)

Please email me with any questions, or propaganda@couchfestfilms.com if you’d like to contact the festival founder or programmers.

Exams, Dissertation Topic and Authorship Issues

University of Southampton library/Jessie Hey ©2006/CC BY 2.0

Phew! Examinations and essay writing are now over for my MSc in Web Science course and I’m now preparing to start my summer dissertation project. Two supervisors have agreed to oversee the progress of my work, the main research question of which is: can comments related to Web-based learning objects be used to good effect in the evaluation of these objects?

This is essentially a learning analytics research project and my initial plan is to collect a suitably large dataset from FutureLearn MOOCs and analyse it using social network analysis and sentiment analysis tools. I aim to examine the learning objects and the topics discussed, the language used, and the sentiment polarity towards these topics and objects. The objective is to identify characteristics of distinct participant roles, look for similarities in language and evaluate these in terms of established pedagogical frameworks (e.g. Dial-e, DialogPLUS) to support relevant schema.org descriptions (e.g. educationalFramework).
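
As a rough illustration of the kind of pipeline I have in mind, the sketch below scores a handful of example comments with NLTK’s VADER sentiment analyser. The comments are invented stand-ins; the real project would use properly collected (and ethically cleared) MOOC data.

```python
# Minimal sketch: sentiment polarity for learner comments using NLTK's VADER.
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

comments = [  # invented examples standing in for MOOC comment data
    "This week's video really helped me understand linked data.",
    "The quiz was confusing and the feedback didn't explain much.",
]

sia = SentimentIntensityAnalyzer()
for text in comments:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(f"{scores['compound']:+.2f}  {text}")
```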

I’m in the process of working out exactly what I’m going to do and how I’ll do it. Because it involves ‘scraping’ web sites for people’s opinions, there are ethical issues as well as theoretical and practical hurdles to be overcome. However, I believe that this type of data can be profitably used to assist with learning object evaluation and think it will be useful to find out if there’s any evidence to back this opinion up.

The ‘authorship issues’ referred to in the title of this post aren’t strictly related to my studies as they have arisen from my previous employment as an Advisor at Jisc Digital Media. However, having just discovered that articles I wrote for my old employer are now being attributed to another person, I have been pondering the ‘web sciencey’ issues raised by this – like trust, provenance, the use of metadata and the nature of authorship in the digital age. By way of an example of what’s happened, here’s a link to an article I wrote about Khan Academy videos in March 2011 as preserved on the Internet Archive’s Wayback Machine (correctly attributed) – and here it is currently published on the Jisc Digital Media Blog (incorrectly attributed).
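
For anyone wanting to check how a page was attributed at a given date, the Internet Archive exposes a simple availability API. The sketch below (using the requests library, with an illustrative URL rather than the exact article path) asks for the snapshot closest to March 2011.

```python
# Minimal sketch: find the Wayback Machine snapshot of a page closest to a date.
import requests

params = {
    "url": "digitalmedia.jiscinvolve.org",  # illustrative URL, not the exact article path
    "timestamp": "20110301",                # YYYYMMDD: snapshot closest to this date
}
resp = requests.get("https://archive.org/wayback/available", params=params, timeout=10)
snapshot = resp.json().get("archived_snapshots", {}).get("closest")
if snapshot:
    print(snapshot["url"], snapshot["timestamp"])
else:
    print("No archived snapshot found.")
```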

Joe Chernov discusses the issue of byline re-attribution in his post: Creators vs Corporations: Who Owns Company Content? All very interesting, and a topic that I will return to in a future post.


Open Hypermedia and the Web

Tim Berners-Lee

Tim Berners-Lee/Silvio Tanaka ©2009/CC BY 2.0

Tim Berners-Lee, the main architect of the World Wide Web (W3), developed the system while working for CERN, the European Organisation for Nuclear Research, in the late 1980s. W3 was developed to overcome difficulties with managing information exchange via the Internet. At the time, finding data on the Internet required pre-existing knowledge gained through various time-consuming methods: the use of specialised clients, mailing lists, newsgroups, hard copies of link lists, and word of mouth.

At CERN, a large number of physicists and other staff needed to share large amounts of data and had begun to employ the Internet to do this. Although the Internet was acknowledged as a valuable means of sharing data, towards the end of the 1980s the need to develop simpler, more reliable methods encouraged the creation of new protocols using distributed hypermedia as a model.

Developments in Open Hypermedia Systems (OHSs) had gained pace throughout the 80s; a number of stand-alone systems had been prototyped and early attempts at a standardised vocabulary had been made [1]. OHSs provide two key features: the separation of link databases (‘linkbases’) from documents, and hypermedia functions made available to third-party applications, with the potential for accessibility within heterogeneous environments.

Two key systems, Hyper-G, developed by a team at the Technical University of Graz, Austria [1], and Microcosm, originating at the University of Southampton [5], were at the heart of pioneering approaches to hypermedia. Like W3, they were launched in 1990, but within ten years both had been outpaced by W3’s overwhelming popularity. Ease of use, the management of link integrity and content reference, and the ‘openness’ of the underlying technology were contributing factors to W3’s success. However, both Hyper-G’s and Microcosm’s approaches to linking media continue to have relevance for the future development of the Web.

The Dexter Hypertext Reference Model

In 1988 a group of hypertext developers met at the Dexter Inn, New Hampshire to create a terminology for interchangeable and interoperable hypertext standards. About 10 different contemporary hypertext systems were analysed and commonalities between them were described. Essentially each of the systems provided “the ability to create, manipulate, and/or examine a network of information-containing nodes interconnected by relational links.”[6]

The Dexter Model did not attempt to specify implementation protocols, but provided a vital reference model for future developments of hypertext and hypermedia. The Model identified a ‘component’ as a single presentation field which contained the basic content of a hypertext network: text, graphics, images, and/or animation. Each component was assigned a ‘Unique Identifier’ (UID), and ‘links’ that interconnected components were resolved to one or many UIDs to provide ‘link integrity’.
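
A loose illustration of these ideas in code (my own simplification, not part of the Dexter specification) might represent components with UIDs and resolve links through those UIDs rather than through locations inside documents.

```python
# Toy illustration of Dexter-style components and links (a simplification,
# not the Dexter model itself): content is addressed by UID, and links
# resolve to one or more UIDs rather than to positions inside documents.
import uuid

class Component:
    def __init__(self, content):
        self.uid = uuid.uuid4().hex   # the component's Unique Identifier
        self.content = content

class Link:
    def __init__(self, *endpoint_uids):
        self.endpoints = list(endpoint_uids)  # a link may resolve to many UIDs

store = {}
a = Component("An introduction to hypertext.")
b = Component("A diagram of linked nodes.")
store[a.uid] = a
store[b.uid] = b

link = Link(a.uid, b.uid)
# 'Link integrity': every endpoint resolves to a known component.
assert all(uid in store for uid in link.endpoints)
```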

The World-Wide Web

By the mid-80s Berners-Lee saw the potential for extending the principle of computer-based information management across the CERN network in order to provide access to project documentation and make explicit the ‘hidden’ skills of personnel as well as the ‘true’ organisational structure. He proposed that this system should meet a number of requirements: remote access across networks, heterogeneity, and the ability to add ‘private links’ and annotations to documents. Berners-Lee’s key insights were that “Information systems start small and grow”, and that the system must be sufficiently flexible to “allow existing systems to be linked together without requiring any central control or coordination”.

His proposal also stressed the different interests of “academic hypertext research” and the practical requirements of his employer. He recognised that many CERN employees were using “primitive terminals” and were not concerned with the niceties of “advanced window styles” and interface design [2].

Towards the end of 1990, work was completed on the first iteration of W3, which included a new Hypertext Markup Language (HTML), an ‘httpd’ server, and the Web’s first browser, which included an editor function as well as a viewer. The underlying protocols were made freely available and within a few years the technology had been used and adapted by a wide variety of Internet enthusiasts who helped to spread W3 technology to wider audiences.

Microcosm

Aimed at providing solutions to perceived problems in contemporary hypermedia systems, Microcosm was launched as an “open model for hypermedia with dynamic linking” [5] in January 1990. The Microcosm team identified that existing hypermedia systems, although useful in closed settings, did not communicate with other applications, used proprietary document formats, were not easily authored, and as they were distributed on read-only media, did not allow users to add links and annotations.

While Microcosm used read-only media (CD-ROMs and laser-discs) to host components within an authored environment, it separated these ‘data objects’ from linkbases housed on remote servers. This local area network-based system allowed all users, authors and readers alike, to add advanced, n-ary (multi-directional) links to multiple generic objects. Microcosm was also able to process a range of documents, and its modular structure later enabled a degree of interoperability with W3 browsers [7].

While recognising the significance of W3, the Microcosm team identified some weaknesses, especially in the way HTML managed links. Rather than storing links separately, W3 embedded links in documents, which made it impossible to annotate or edit web documents and led to ‘dangling’ or missing links when documents were deleted or URLs changed. In addition, HTML was limited in how links could be made: there was a small number of allowable tags, and only single-ended, unidirectional links could be authored. To counter these link integrity issues the Microcosm team developed the Distributed Link Service (DLS), which enabled the integration of linkbase technology into a W3 environment [3].

Using the DLS, W3 servers could access linkbases, enabling user-authored generic as well as specific links. Generic link authoring allows users to create links from any mention of a phrase within a set of documents, and allows bi-directional links within documents.
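
The sketch below is a toy illustration (not the DLS itself) of the central idea: links live in a linkbase keyed by phrase, so any occurrence of that phrase in any document can be resolved to a target without embedding anchors in the text.

```python
# Toy generic-link resolution: the linkbase is held apart from the documents,
# so a phrase links wherever it occurs and targets can be updated in one place.
linkbase = {  # phrase -> target (illustrative entries)
    "hypertext": "doc://glossary#hypertext",
    "linkbase": "doc://glossary#linkbase",
}

def resolve_links(document_text, linkbase):
    """Return (phrase, target) pairs for every linkbase phrase found in the text."""
    found = []
    lowered = document_text.lower()
    for phrase, target in linkbase.items():
        if phrase in lowered:
            found.append((phrase, target))
    return found

doc = "Microcosm kept the linkbase separate from each hypertext document."
print(resolve_links(doc, linkbase))
```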

Hyper-G

Hyper-G offered a number of solutions to the linking issues identified by others working in hypermedia systems development. In a similar manner to Microcosm, Hyper-G stored links in link databases. This allowed users to attach their own links to read-only documents; multiple links could be attached to documents, or to anchors within text or any other media object; users could readily see what objects were linked to; and links could be followed backwards, so users could see “what links to what”. Unlike Microcosm, the system used a probabilistic flood (‘P-Flood’) algorithm to manage updates to remote documents and linkbases, ensuring link integrity and consistency by, in essence, informing linkbases when documents had been deleted or changed.
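
To make the “what links to what” idea concrete, here is a toy backlink index (my own illustration, nothing to do with the actual P-Flood algorithm): because links are stored centrally, they can be traversed in either direction and flagged when a target disappears.

```python
# Toy bidirectional link index: links stored outside documents can be
# followed backwards and checked when a document is removed.
from collections import defaultdict

links = [("intro.html", "graz.html"), ("intro.html", "soton.html"),
         ("graz.html", "soton.html")]

backlinks = defaultdict(list)
for source, target in links:
    backlinks[target].append(source)

print(backlinks["soton.html"])      # -> ['intro.html', 'graz.html']

# If a document is deleted, every link pointing at it is now dangling.
deleted = "soton.html"
dangling = [(s, t) for s, t in links if t == deleted]
print(dangling)
```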

Like W3, Hyper-G was a client-server system with its own protocol (HG-CSP) and markup language (HTF). Hyper-G browsers integrated with the Internet services W3, WAIS and Gopher, supported a range of objects (text, images, audio, video and 3D environments), and combined authoring functionality with support for collaboration.

Hyper-G was a highly advanced system that successfully applied key hypermedia principles to managing data on the Internet. As web usability expert, Jakob Nielsen asserted, it offered “some sorely needed structure for the Wild Web” [8].

Why W3 Won

Despite acknowledged limitations, W3 retained its position as the de facto means of traversing the Internet, and continued to grow and spread its influence. The reasons for this are relatively straightforward.

W3 was free and relatively easy to use; anyone with a computer, a modem and a phone line could set up their own servers, build web sites and start publishing on the Internet without having to pay fees or enter into contractual relationships.

Although W3 was limited in terms of hypermedia capability, these shortcomings were not serious enough to prevent users from taking advantage of its data sharing and simple linking functions. Dangling links could be ignored, as search engines allowed users to find other resources, and improved browsers allowed users to keep track of their browsing history and backtrack through visited pages.

In contrast, Microcosm and Hyper-G were developed, in their early stages at least, as local systems. This enabled them to employ superior technology to manage complex linking operations much more effectively than W3. However, this focus led to systems that were significantly more complex to manage than W3, and presented difficulties for scaling up to the wider Internet. In addition it was not clear which parts, if any, were free for use. Both systems promoted commercial versions early in their development which had the unintended effect of stifling adoption beyond an initial core group of users.

Future directions

W3 has developed into a sophisticated system that provides many of the functions of an open hypermedia system that were lacking in its early stages of development. Attempts to integrate hypermedia systems with W3 [3],[4],[9] and to find solutions to linking and data storage issues influenced the development of the open standard Extensible Markup Language (XML) and the XPath, XPointer and XLink syntaxes. While HTML describes documents and the links between them, XML contains descriptive data that add to or replace the content of web documents. XPath, XPointer and XLink describe addressable elements, arbitrary ranges, and connections between anchors within XML documents respectively.
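
As a small, hedged example of the addressing these standards provide, the sketch below uses the XPath subset supported by Python’s standard-library ElementTree to pick elements out of an illustrative XML fragment (the document and element names are invented).

```python
# Minimal XPath-style addressing with ElementTree (standard library).
# The XML fragment and element names are illustrative only.
import xml.etree.ElementTree as ET

xml = """
<catalogue>
  <paper id="dexter"><title>The Dexter Hypertext Reference Model</title></paper>
  <paper id="microcosm"><title>Microcosm: An Open Model for Hypermedia</title></paper>
</catalogue>
"""

root = ET.fromstring(xml)
# Address elements by structure rather than by anchors embedded in the content.
for title in root.findall("./paper/title"):
    print(title.text)

# ElementTree also supports simple predicates, e.g. selecting by attribute.
print(root.find("./paper[@id='microcosm']/title").text)
```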

XML may be combined with Resource Description Framework (RDF) and Web Ontology Language (OWL) protocols to store descriptive data that produce web content in more useful ways than with simple HTML. These protocols allow web content to be machine-readable, allowing applications to interrogate data and automate many web activities that have previously only been executable by human readers. These protocols are seen as precursors for the ‘Semantic Web’, a new development of W3 that links data points with multi-directional relationships rather than uni-directional links to documents [10].
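
A brief sketch of that idea using the rdflib library (the URIs below are invented for illustration): data are expressed as subject-predicate-object triples that a machine can query and traverse in any direction, rather than as prose connected by one-way anchors.

```python
# Minimal RDF example with rdflib (pip install rdflib).
# The URIs are illustrative, not real vocabularies or resources.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/hypermedia/")
g = Graph()

g.add((EX.microcosm, EX.developedAt, URIRef("http://example.org/org/southampton")))
g.add((EX.microcosm, EX.label, Literal("Microcosm")))

# Triples can be queried 'backwards' as easily as 'forwards'.
for subject in g.subjects(EX.developedAt, URIRef("http://example.org/org/southampton")):
    print(subject)

print(g.serialize(format="turtle"))
```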

References

[1] Keith Andrews, Frank Kappe, and Hermann Maurer. The Hyper-G Network Information System. In J. UCS The Journal of Universal Computer Science, pages 206–220. Springer, 1996.

[2] Tim Berners-Lee. Information Management: A Proposal. CERN, 1989.

[3] Les A Carr, David C DeRoure, Wendy Hall, and Gary J Hill. The Distributed Link Service: A Tool for Publishers, Authors and Readers. 1995.

[4] Hugh Davis, Andy Lewis, and Antoine Rizk. Ohp: A Draft Proposal for a Standard Open Hypermedia Protocol (Levels 0 and 1: Revision 1.2-13th March. 1996). In 2nd Workshop on Open Hypermedia Systems, Washington, 1996.

[5] Andrew M Fountain, Wendy Hall, Ian Heath, and Hugh C Davis. Microcosm: An Open Model for Hypermedia with Dynamic Linking. In ECHT, pages 298–311, 1990.

[6] Frank Halasz, Mayer Schwartz, Kaj Grønbæk, and Randall H Trigg. The Dexter Hypertext Reference Model. Communications of the ACM, 37(2):30–39, 1994.

[7] Wendy Hall, Hugh Davis, and Gerard Hutchings. Rethinking Hypermedia: the Microcosm Approach, Volume 67. Kluwer Academic Publishers Dordrecht, 1996.

[8] Hermann Maurer. Hyperwave – The Next Generation Web Solution, Institute for Information Processing and Computer Supported Media, Graz University of Technology, [Online: http://www.iicm.tugraz.at/hgbook Accessed 5 December 2013].

[9] Dave E Millard, Luc Moreau, Hugh C Davis, and Siegfried Reich. Fohm: A Fundamental Open Hypertext Model for Investigating Interoperability Between Hypertext Domains. In Proceedings of the Eleventh ACM on Hypertext and Hypermedia, pages 93–102. ACM, 2000.

[10] Nigel Shadbolt, Wendy Hall, and Tim Berners-Lee. The Semantic Web Revisited. Intelligent Systems, IEEE, 21(3):96–101, 2006.


5 interactions between the Web and Education that are changing the way we learn

Using MACs in the Computer Laboratory/University of Exeter ©2008/CC BY 2.0

The way we learn and the tools we use to extend our capacity for learning have always been closely interrelated. Over 2000 years ago wax tablets enabled learners to show their working, 500 years ago the introduction of movable type made books more accessible, 150 years ago the postal system provided the infrastructure for distance education, the introduction of radio and television services established the means for widespread educational initiatives, and personal computers and portable video making equipment were widely adopted by educators in the 1970s and 80s. Since the emergence of the Web 25 years ago, both learners and educators have exploited the potential of the underlying technologies and the services developed with them to support and change the way we think about learning in many fundamental ways.

1. Technology

The educational value of the Internet was recognised at its inception, and computing science academics working in universities and colleges keenly adopted the technology to share data among themselves and with their students. However, as the number of resources hosted on networked computers increased, they tended to become ‘siloed’ and difficult to find. The invention of the Web fundamentally changed this environment and the way people interacted with the Internet. The underlying protocols that govern the way the Web works are based on linking electronic documents over disparate networks using web browser applications. By making the protocols open to everyone at no cost, the Web’s founders allowed people to build upon the technology; for example, one of the earliest adaptations introduced the search function, which enables users to discover resources significantly more easily than with earlier technologies.

In the mid-1960s Gordon Moore identified an interesting fact about the processing power of computers: it appeared to double every two years. Once this filtered through to computing hardware manufacturers, and as the demand for personal computers increased, it became something of a self-fulfilling prophecy, one that has led to the development of ever more sophisticated, ever smaller, less expensive computing devices. From laptop computers to smartphones to tablets to Google Glass and Radio-frequency identification (RFID) devices, this phenomenon has placed powerful, mobile computing into the hands of more than 1.5 billion people worldwide, allowing learners and educators to access significantly more information than has been available to any previous generation.

2. The Evolving Web

The early Web gave learners and educators a taste of what could be achieved in this new environment. Learners could access information that had previously been ‘hidden’ in libraries and archives, and educators were able either to convert existing instructional programmes, quizzes and exams into Web-enabled resources or to develop new assets that guided learners through a set of learning objectives. But this essentially static, ‘read-only’ Web allowed little opportunity for learner interaction, collaboration and sharing, all vital components of the learning process. This began to change with the introduction of wikis in the mid-90s.

These web applications enable users to comment on or change the text on a web page that has been written by others, and provide a platform for group collaboration and sharing. In addition to inspiring the creation of the global knowledge bank that is Wikipedia, wikis encapsulated many of the features of a ‘read, write and execute’ web – what is commonly referred to as Web 2.0.

The ability to readily create a presence on the Web via blogs, social networking, and video sharing sites has created a dynamic resource that continues to make radical changes to our learning and teaching experience. Web 2.0 applications have been embraced by learners and educators at all levels. YouTube and other video sharing sites provide a platform for user-generated how-to videos, software advice, and exemplars of arts and science disciplines (e.g. The LXD: TED Talk, Periodic Videos and Khan Academy) that inform and inspire millions of informal learners as well as students in formal education. The social networking site Facebook is used by teachers to facilitate collaborative group work (e.g. in Music Technology at Bridgend College), and a large number of user-generated resource sharing sites (e.g. Flickr, SlideShare and Storify) and cloud computing services (e.g. Google Drive, WeVideo, and Pixlr) enable learners and educators to extend their tools and resources beyond the traditional classroom.

3. Theory

The network of collaborative and productive spaces enabled by Web 2.0 has inspired an invigoration of constructivist educational theory and its application to a range of online learning spaces. Learners and educators are able to communicate, provide feedback and collaborate in order to co-create the learning process using a variety of free-to-access synchronous and asynchronous technologies.

In constructivist theory learning takes place primarily through interaction between learners and between learners and teachers. Teachers assess the suitability of technologies in various settings and judge what are called their affordances for learning, that is, the essential features of a technology and what the interface allows learners to do. For example the affordances of Facebook may be the opportunities to support collaboration, a shared group identity and understanding of knowledge. Once the teacher is familiar with the environments they can orchestrate learning in a manner that supports learners through the process (i.e. ‘scaffolding’).

The Web has also revived interest in ‘autonomous education’, highlighted by interest in the ‘Hole in the Wall’ experiments undertaken by Professor Sugata Mitra in the late 90s. These experiments involved observing children’s use of Web-connected computers placed in open spaces in rural settings in India, and demonstrated that children were able to learn how to use the devices, to find information and to teach others how to use the computers without any instruction or guidance.

While supporting opportunities for self-learning, the Web also provides a platform for delivering timely instruction and feedback that can shape learning outcomes using operant conditioning methods. This approach to teaching is based on behaviourist theory which claims that learning can be reinforced through the use of rewards and punishments. In Web-based learning environments this is normally applied through the use of ‘gamification’ techniques such as the awarding of virtual badges for achievement or through the provision of a visual indication of learner progress (e.g. a ‘progress bar’).
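
As a trivial, hedged illustration of that last point, the sketch below awards badges as a learner’s progress passes set thresholds and renders a simple progress bar; the thresholds and badge names are invented.

```python
# Toy gamification logic: award badges as completion passes thresholds.
# Thresholds and badge names are invented for illustration.
BADGES = [(0.25, "Getting Started"), (0.50, "Halfway There"), (1.00, "Course Complete")]

def badges_for(completed_steps, total_steps):
    progress = completed_steps / total_steps
    earned = [name for threshold, name in BADGES if progress >= threshold]
    bar = "#" * int(progress * 20)
    return f"[{bar:<20}] {progress:>4.0%}", earned

print(*badges_for(7, 12))  # a visible progress bar plus any badges earned
```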

4. Pedagogy

New technologies inspire new approaches to teaching, and the Web has made a huge impact in this area. Formal education has adopted new approaches including the use of Virtual Learning Environments (VLEs), e-Portfolios, and Massive Open Online Courses (MOOCs), which support new blended learning methods. Course materials, formative assessments, lecture recordings (including video, audio and synchronised slides), and assignment information and submission form the backbone of VLEs used in most educational institutions. In addition, many institutions encourage their students to develop their own ePortfolios – a self-edited collection of coursework, blog posts and other educational activity that reflects the student’s progress, experience and knowledge gained during their time at a university or college. These are often integrated with (although kept separate from) the more formal VLE and the institution’s Careers Service, and used as an addition to a student’s Higher Education Achievement Record.

VLEs are primarily used to support ‘bricks and mortar’ education; they are not viewed as a replacement for class-based learning, but are ‘blended’ with traditional methods. MOOCs, on the other hand, appear to be heralding a paradigm shift in the delivery of formal learning. This relatively new web-based form of distance learning emerged in 2008 and has its antecedents in Open Educational Resources initiatives. MOOCs typically provide opportunities for an unlimited number of learners to experience a short college or university level module (normally around 6 weeks in length), delivered using synchronous and asynchronous tutorials, web-based video, readings and quizzes. At the end of the course learners are required to produce some form of relevant feedback that demonstrates their achievement, which is then assessed by their course peers or course tutors.

5. Openness

The early decision to open Web technologies for all was inspired by research sharing practices in academia, and as the Web has developed it has been used as a platform for sharing ideas, research and teaching. Open Access to research papers, which have traditionally been published by academic journals and made available at a high premium, has the potential to transform learning and research. Making academic research available to everyone via the Web provides opportunities for wider access to learning for the poor and those living in rural areas, and improves the uptake of research outputs.

Similarly Open Educational Resource initiatives are providing opportunities for teachers to share teaching materials, allowing others to reuse and repurpose content. Issues regarding ownership of content have been overcome in many instances through the use of Creative Commons licenses – a scheme that allows content owners to clearly show how they would like others to use their material.

The increasing ubiquity of Web technologies, combined with the culture of openness promoted by the Web’s founders and the increasing availability of low-cost Web-enabled devices, is transforming opportunities for learning and teaching, and is changing the way education is perceived. Despite inequalities of access (the ‘digital divide’) and variations in ‘web literacies’, the opportunities for accessing education are greater today than ever before, largely thanks to the Web.


Encouraging a corporate open data culture

Introduction

The Royal Society’s influential paper on the use and misuse of risk analysis asserts that “[a]ny corporation, public utility or government will react to criticism of its activities by seeking…new ways to further the acceptable image of their activities” (Pearce, Russell & Griffiths, 1981). In the past decade the timely availability of relevant data has become widely acknowledged as having “a huge potential benefit” to the practice of risk assessment and management (Hughes, Murray, & Royse, 2012). Partly in response to climate change concerns, the importance of access to data is now acknowledged at local, national and international levels. To enable and encourage the wider use of public environmental and health related data, initiatives like the European Union’s INSPIRE Directive are establishing standardised, legally enforceable data infrastructures (European Union, 2014), and many governments have adopted ‘open data’ strategies.

While the benefits of open data have been recognised and are being acted on in the public realm, most commercial organisations have been slow to respond, despite the good intentions of some corporations (Ghafele & O’Brien, 2012; Alder, 2014). The principal barriers to data sharing in the corporate sector have been identified as concerns over intellectual property, commercial confidentiality, and ‘cultural’ issues. While not offering any actionable recommendations to tackle these issues, the UK Government’s recent ‘Foresight’ Review asserts that “a more holistic approach to risk analysis…is undoubtedly needed” (Hughes et al, 2012).

Risk analysis and the management of uncertainty demand an interdisciplinary approach (Rougier et al., 2010: 4), and the purpose of this essay is to follow this course and explore the social science disciplines of Anthropology and Economics in order to propose a combined approach that includes relevant methods from both fields. While the evolution of these disciplines has followed different trajectories, and underlying methodological differences can be identified, the increasingly blurred boundaries within science ensure that the identification of discrete ontologies is problematic. The move towards transdisciplinarity, involving as it does the sharing of research tools and theoretical perspectives, and the emergence of new multidisciplinary fields (e.g. economic anthropology), provides a fertile field for developing ‘Mode 2’ research propositions (Nowotny, 2001).

Specifically, this essay explores the factors influencing data sharing in the hydrocarbon exploration industry (HEI) where potential exists for the timely publication of data gathered from monitoring hydraulic fracturing activity.

Background

Hydraulic fracturing, more widely known as ‘fracking’, is a technique that has been used to release and collect methane gas from shale rock for more than 60 years. The fracking process employs explosive charges and specially formulated chemical fluids pumped under high pressure to help release gas for extraction. This process takes place more than 1,500m below ground level, at a significantly greater depth than typical coal mining activities (Mair et al, 2012; Wood, 2012). The British Geological Survey estimate that “resources of 1,800 to 13,000bcm [billion cubic metres]”, the equivalent of more than 23 years supply at current UK consumption rates, are “potentially recoverable” from sites in northern and southern England (POSTbox, 2013). However exploration is required in order to discover if this potential is realisable.

Public concerns about fracking focus on the possibility of increased seismic activity, leakage of chemical contaminants into the water table, air pollution caused by the leakage of methane, and the continuing reliance on carbon resources with potentially harmful effects on the world’s climate (Mair, et al, 2012; Kibble et al, 2013; Ricketts, 2013). These concerns have been expressed in public demonstrations against the process (The Guardian, 2013), and the introduction of moratoria on exploration in a number of countries. These public expressions of concern are viewed by the HEI as a significant additional risk to an already hazardous enterprise (Wood, 2012).

In the UK, all industrial activities are subject to health and safety audits and some involve continuous, around-the-clock monitoring. For example, in the HEI, Cuadrilla Resources commission Ground Gas Solutions Ltd. to provide monitoring services (Cuadrilla, 2013) which aim to: “…provide confidence to regulators, local communities and interested third parties that no environmental damage has occurred.” (GGS Ltd., 2013). Some of these data are made public via reports to regulatory authorities, which can be subject to significant delay, are written in formal, technical language, and are not easily accessed by the general public (Boholm, 2003: 172). This essay proposes an interdisciplinary research methodology to explore the potential for allowing open access to real-time (or close to real-time) monitoring data that could help to alleviate some public concerns.

Economics

Whether analysing large scale issues of national or global significance (macroeconomics) or focussing on the actions of individuals and local groups (microeconomics), the study of economics is defined by its evaluation of human behaviour in relation to the exploitation and control of scarce resources. In all disciplines there are varieties of opinion on the efficacies of different theories; in economics this can be illustrated by reference to the divergent theories regarding government intervention in markets advocated by Keynesian economists and those following the Chicago School. In practice economists prioritise their research by balancing the availability of data and the effectiveness of its collection against the needs of their audience (e.g. government agencies and corporations) and the strength of their beliefs in the determining factors that influence the behaviour of individuals in society (Kuznets, 1978). For example, when seeking solutions to economic depression a Keynesian may advocate increased government spending, whereas a Chicago School economist would suggest increased money supply, allowing a free market to correct itself.

Key concepts in economics include the evaluation of the costs and benefits of future economic activity and the maximisation of utility. Predicting the outcomes of activities with varying levels of uncertainty involves the collection of relevant data, risk analysis and the evaluation of statistical probability. In high-risk investment industries the effective collection and analysis of data is vital, not least in hydrocarbon exploration, where the large rewards for discovering untapped, scarce resources are balanced by the huge investments involved in exploration. The assessment of risk plays a significant part in evaluating the potential costs and economic value of recoverable hydrocarbon resources, and multidisciplinary teams comprising geologists, statisticians, legal experts, engineers and economists are engaged within the HEI to ensure that rational choices are made, resources are used to their full potential and that risk is kept ‘as low as reasonably practicable’ (HSE, 2014). A range of complex and exhaustive appraisal models are used in evaluation, the core aims being to use data as efficiently as possible and to minimise subjectivity in order to reduce uncertainty when ascertaining the economic risks and rewards (Nederlof, 2014).

The evaluation process can be broken down into three key stages:

  • Resource evaluation. This is normally undertaken using a “petroleum system model” and is based on the assumption of five independent geological processes that facilitate hydrocarbon accumulation: generation, migration, entrapment, retention and recovery (Häntschel & Kauerauf, 2009). Data for each of these processes are collected using a range of tools (e.g. Geographic Information Systems software) (Hood et al, 2000).
  • Monte Carlo statistical analysis. This uses computer-based statistical analysis tools (e.g. Palisade Corporation, 2014) to process input variables many thousands of times using different random choices to create vectors of equally probable outcomes. A typical output from this process is a range of expectation curves which display the predicted outcomes in ascending order of probability (Nederlof, 2014); a minimal sketch of this step follows the list.
  • Economic appraisal. Essentially this involves translating the predicted amount of recoverable resources into a cash value. Considerations of the value of the resource need to take account of inflation, predicted future prices, regulation, safety, health and environmental considerations, and exploitation contracts and licences. All of these factors are subject to variation over time (e.g. the possibility of a future ‘windfall tax’) and economists typically provide a number of alternative scenarios indicating the probabilities arising from the interplay of different variables (Haldorsen, 1996).
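
For readers unfamiliar with the Monte Carlo step, the sketch below is a deliberately simplified illustration (invented distributions and numbers, nothing like a real petroleum system model): uncertain inputs are sampled many times, combined, and summarised as an expectation curve of exceedance probabilities.

```python
# Simplified Monte Carlo illustration: combine uncertain inputs many times
# and read off exceedance probabilities. All distributions are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

area_km2  = rng.lognormal(mean=2.0, sigma=0.4, size=n)    # uncertain reservoir area
net_pay_m = rng.normal(loc=30, scale=8, size=n).clip(min=1)
recovery  = rng.uniform(0.05, 0.25, size=n)                # uncertain recovery factor

resource = area_km2 * net_pay_m * recovery                 # arbitrary units

# Expectation curve summary: P90 is exceeded ~90% of the time (conservative),
# P10 only ~10% of the time (optimistic).
p90 = np.percentile(resource, 10)
p50 = np.percentile(resource, 50)
p10 = np.percentile(resource, 90)
print(f"P90 {p90:,.0f}   P50 {p50:,.0f}   P10 {p10:,.0f}")
```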

While the statistical analysis of this detailed mesh of quantitative data is a powerful tool in helping decision makers in the HEI, economists understand that care must be taken in reaching definitive conclusions and in making predictions. A key concern is that primary data may be treated without a suitable understanding of the historical background, conventions and collection practices that influence the production of these data (Fogel, Fogel, Guglielmo & Grotte, 2013: 96). An appreciation of the contribution of anthropological research may be helpful in this area.

Anthropology

Although anthropologists “cast their net far and wide” (Eriksen, 2004: 45) in order to provide context for their observations, their work is undertaken primarily through close interaction with individuals and the groups they inhabit. In-depth, structured interviews are used extensively and the key research method is ‘participant observation’ – the goal being to extensively record everyday experiences as an aid to gaining new knowledge on the existence (or otherwise) of ‘human universals’ (shared characteristics).

Developing from the study of ‘exotic’ cultures in the 19th and early 20th centuries, it is perhaps inevitable that a field as large as the scientific study of humanity, at all times and in all places, would branch into a heterogeneous collection of sub-disciplines – ‘urban anthropology’, ‘design anthropology’, ‘theological anthropology’, ‘digital anthropology’, and so on. Although there probably is an ‘anthropology’ for every area of human activity, each with its own unique ontology, the feature that distinguishes this social science from other, similar disciplines (e.g. sociology) resides primarily in its approach to data collection and interpretation. Unlike researchers in most other disciplines, anthropologists immerse themselves within the social and cultural life of their subjects, living closely ‘in the field’ with the people they are studying. The purpose is to attempt to see the world from the subjects’ point of view, and to provide a rich, contextualised, ‘thick’ description and localised interpretation of this perspective (Geertz, 1994: 140).

Data collection follows a systematic approach which typically focuses on particular fields of study, primarily: kinship, reciprocity, nature, thought and identification. For example, an anthropologist may explore how the community they are researching views reciprocity: how gifts are exchanged, how goods are paid for, and how the community views property, as well as those things that cannot be exchanged or given away (Weiner, 1992: 33). Comparisons can then be made between groups with a view to establishing and understanding similarities and differences, and ultimately identifying characteristics which are unique to specific societies and those that are universally shared (Goodenough, 1970).

Within the terms of this essay, perhaps the most appropriate sub-discipline to explore in some detail is the one in which anthropologists are commissioned by commercial organisations to describe and analyse ‘organisational culture’ – what is typically referred to as ‘organisational anthropology’. Anthropologists working in the commercial sector are usually engaged in ‘problem-oriented’ research, attempting to uncover the root of human relations issues identified by corporate leaders (Catlin, 2006). Within this environment they apply anthropological methodologies to particular fields of interest, for example: work processes, group behaviour, organisational change, consumer behaviour, product design, and the effects of globalisation and diversity (Jordan, 2010). The focus of this research is placed on talking with employees and management to reach descriptions and interpretations of the overall culture as well as any existing sub-cultures, with the aim of providing recommended courses of action that are relevant to the organisation’s strategic goals.

In addition to work in the corporate sector, the anthropologists’ practice of long-term engagement is also useful to public policymakers, where collected data can be extremely useful in tracking changes over extended periods of time (Perry, 2013). Within the HEI, anthropologists explore the relationships between companies, state organisations and communities (Stammler & Wilson, 2006), the cultural implications of the regulation of risk (Kringen, 2008), the environmental impact on communities and their resilience to exploration (Buultjens, 2013), as well as land use and the social organisation of the workforce (Godoy, 1985).

Finally, Monte Carlo analysis is not simply the preserve of economic analysts. The method is used in other social sciences including social anthropology (Tate, 2013), linguistics (Klein, Kuppin & Meives, 1969), education (Pudrovska & Anishkin, 2013) and public health studies (Morera & Castro, 2013) and applied to statistical analysis when evaluating and predicting incomplete or missing data.

Proposal for an interdisciplinary approach

This essay has explored the relevant theories and research themes that influence those involved in economic decisions in the HEI, and how anthropology approaches the study of cultures. A key element in the context of this essay is the evaluation of risk: how does the HEI balance risk and reward in the search for scarce, economically recoverable resources, and what can anthropology offer in understanding the human perception of risk? Central to the risk question, both evaluation and perception, is how data are used to aid economic decision making on the part of corporations, and to enable society to compare potential hazards and manage health, safety and environmental concerns.

When experts analyse risk in the HEI, the terms they use to define the costs and benefits of a particular course of action are highly relevant to decision makers, but may have little meaning to “people in social settings” (Boholm, 2003: 166). While the maximisation of utility through rational choices motivates the statistical analysis of potential hydrocarbon fields, from the anthropology perspective this approach fundamentally misrepresents the essentially cultural construction of risk perception (Bourdieu, 2005: 215) and has “limited relevance for explaining how people think and act in situations where there is an element of uncertainty” (Boholm, 2003: 161).

Although generally useful, there are two essential problems with this approach. Firstly, anthropologists are divided on the concept of ‘culture’. In its plural form it can be seen as divisive and not conducive to identifying human universals; definitions of ‘culture’ are often vague and do not acknowledge the permeability of boundaries in human society, or the possibilities for internal variation (Hannerz, 1992: 13). Secondly, when explaining ideas of risk and hazard, anthropology tends to favour definitions based on objective social phenomena (e.g. ‘taboo’ in traditional societies is viewed as a means of maintaining social order – Tansey and O’Riordan, 1999: 74) rather than an individual’s subjective consideration of risks based on available evidence (Slovic, 1987: 280). However, by taking care when making generalised statements regarding ‘culture’, by exploring how people “identify, understand and manage uncertainty in terms of knowledge of consequences and probabilities of events” (Boholm, 2003: 166), and by acknowledging the relevance of expert risk analysis, a consensus definition of risk can be expressed as: “a situation or event where something of human value (including humans themselves) has been put at stake and where the outcome is uncertain” (Rosa, 1998: 28).

Managing risks at both a corporate and community level entails the timely communication of relevant data in a form that can be readily understood by all parties. In the current setting economic analysis can provide some highly relevant expert insight into risk in the HEI, and anthropological research can describe and interpret the context of the perception and consideration of risk and uncertainty.

In essence this combined approach would involve primary anthropological research methods including in-depth structured interviews, and participant observation within the HEI and affected communities. The outputs of these studies would be used to inform a more nuanced approach to uncertainty and risk in economic modelling and the use of computational methods (including Monte Carlo analysis) to predict the effects of social vulnerability and environmental protest activity on hydrocarbon exploration. By adopting this form of research methodology it is proposed that an effective approach to communicating risk can be formulated which may encourage a more transparent publication of data and help the HEI “to further the acceptable image of their activities” (Pearce, et al., 1981).

References


Open Access: it’s not rocket science

On the eve of his appearance to give evidence at the House of Lords Science and Technology Select Committee on Open Access in November 2013, OA evangelist Professor Stevan Harnad spoke about his concerns following the UK government’s apparent u-turn on Green Open Access. Acting on the Finch Report on Open Access to scholarly articles, the government (and Research Councils UK) had accepted what Harnad described as an “astonishing recommendation”, essentially proposing to pay publishers considerably more than necessary for Open Access.

Harnad kick-started the OA debate in 1994 with the publication of his ‘Subversive Proposal’, suggesting that scholarly articles should be made freely available for all via the Web. Physicists and computer scientists had been doing this for years, he argued, and it was about time the rest of the world did the same. The benefits were obvious: academics don’t publish for profit – they do so for impact and usage, to gain uptake and application of their ideas, and the evidence shows that OA articles are cited more than non-OA.

Subsequent to the ‘Proposal’, the School of Electronics and Computer Science at the University of Southampton used ePrints to create the world’s first OA repository, and mandated OA for all of its journal articles. In 2003 the UK House of Commons Science and Technology Committee supported this approach, the research councils adopted watered-down Green OA policies, and universities and institutions around the world began to follow suit.

However, despite the growth in Open Access in recent years, students and academics continue to access scholarly articles via their institutions’ subscriptions to peer-reviewed journals. These annual subscriptions can amount to many hundreds of thousands of pounds, and even the most well-endowed universities (e.g. Harvard) are unable to subscribe to as many journals as they would like. There are workarounds to deal with this – contacting published academics directly, for example – but, Harnad asserts, it is more cost-effective and better for research for institutions to adopt Green OA policies and make articles freely available once they have completed the peer review process.

Although hailed as a “balanced package”, the adoption of the Finch Report’s recommendation that additional payments be made to publishers to cover the costs of ‘Gold’ OA (where the publisher itself makes the article openly available, typically in return for an article processing charge) is seen by many advocates of Green OA as a retrograde step. However, the Higher Education Funding Council for England’s policy proposal, which promotes immediate deposit (i.e. Green OA) as a condition for future Research Excellence Framework eligibility, appears likely to be adopted. Should this happen, as Harnad hopes, Finch’s recommendation is likely to be sidelined.

Speakerthon 2014: Reusing BBC audio

Last Saturday I attended Speakerthon, a collaborative web-enhancement event organised by BBC R&D and Wikimedia UK. The aim of the day was to interrogate BBC Radio 4’s permanently available archive (e.g. The Woman’s Hour Collection), select clips of notable people speaking, and add them to Wikipedia. Wikimedia UK’s Andy Mabbett thought up the idea and has spent the past two to three years convincing BBC decision makers of the efficacy of opening up their archive. In addition to applying open licences to BBC content, providing a rich layer of information to Wikipedia entries, and adding good quality linked data to the Web, the visibility of the archive is greatly enhanced, and tagged clips will be used to teach applications to automatically identify voices in the archive (e.g. The World Service Radio Archive Project), thereby making BBC researchers’ jobs a great deal easier.

The day started with a briefing session. We were shown how to use the BBC ‘Snippets’ software (sadly only made available to us on the day), and what type of clips to listen out for. Finding 20 to 40 second clips of individuals talking, preferably about themselves or their field of work, without interruption or any background music was frustrated on some programmes by over-enthusiastic interviewers who would insist on butting in, whereas others (like Desert Island Discs) proved to be a goldmine of useful clips.

Once a clip was identified and selected, ‘Snippets’ created a URL, which we manually added to a Google Docs spreadsheet along with the person’s name and gender, Wikipedia URL, and programme archive URL. This was then picked up by the BBC editorial team, who checked ‘compliance’ (i.e. the suitability of the clip and any outstanding copyright issues), trimmed and edited the clip (using Audacity, a free audio editor), encoded it to the open source .flac format, and uploaded it to Wikimedia.
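
The editing was done in Audacity on the day, but as a hedged aside for anyone repeating the trim-and-encode step on their own, properly licensed audio, the same result can be scripted by calling ffmpeg. The file names and timings below are invented.

```python
# Minimal sketch: trim a clip and encode it to FLAC by calling ffmpeg
# (ffmpeg must be installed separately). File names and timings are invented.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "desert_island_discs_episode.mp3",  # source recording (illustrative)
    "-ss", "00:12:30",                        # clip start
    "-t", "30",                               # clip length in seconds
    "-c:a", "flac",                           # encode the audio stream to FLAC
    "speaker_clip.flac",
], check=True)
```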

At the time of writing about 100 clips have been uploaded out of the 300 created on the day. I added eleven clips to the Google Docs spreadsheet, three of which have been uploaded to Wikimedia.  So far I’ve embedded voice clips and metadata for Owen Hatherley and Claire Skinner, and three of the clips: Guglielmo Marconi, his second wife Maria Cristina Bezzi-Scali and John Scott-Taggert (the first person to receive a radio message from a ship in distress) are awaiting confirmation of their copyright status.

It was a real joy to take part in this collaborative cyberspace project and to be in at the start of a project that has the potential to have an effect considerably greater than the sum of its parts.

See also: Speakerthon: Sharing Voice Samples – Marieke Guy, Open Education Working Group

Please note: While capturing audio from the BBC’s web archive and uploading it to Wikipedia (or anywhere else) is relatively straight-forward, doing so without the express permission of the BBC infringes their copyright.

Getting Started with WordPress

Found Blur Motion

Found Blur Motion/ilouque ©2011/CC BY 2.0

Earlier this month, Dr Lisa Bernasek, Academic Coordinator for Languages, Linguistics and Area Studies, approached us with a request for assistance. Like many academics who recognise the benefits of blogging in support of learning, Lisa had included a blog-post writing requirement as part of her new The Arab World (in and) Beyond the Headlines module. She asked students to contribute posts outlining their reflections on developments in the Arab world to the module’s blog site, the aim being to help them organise their thoughts on the topic, give feedback (both student to student and student to tutor), and keep a record of their progress. The problem was that most of her 60 students had no experience of using blogging applications and only a few had used WordPress (the University’s blogging app of choice).

As WordPress is widely used it’s very likely that students will come into contact with it, and use it, in their future employment. In fact I re-built my own business web site using WordPress earlier this year and regularly use it to blog about my work.

To help her students get started I worked with Lisa to devise a 30 minute introduction to WordPress and produced an online resource to provide post-workshop support. I ran 3 workshop sessions in computer rooms at the Avenue campus during the third week of the module. The workshops covered:

  • logging in,
  • writing your first post,
  • using categories and tags,
  • a brief overview of the law and copyright, and
  • an introduction to embedding media within posts.

Feedback from the workshops was good, and we expect to run similar sessions with the next cohort taking this module.

If you need assistance getting started with WordPress, or using other digital and social media, please get in touch.
