Open Assessment ioe12

The area of assessment, recognition and reward for skills development and learning beyond formal education has been a difficult one. However, Digital Media and Learning Competition 4, sponsored by the MacArthur Foundation (Press Release) in association with HASTAC and Mozilla (who are developing the infrastructure), is creating the concept of Badges for Lifelong Learning: badges that demonstrate competency levels, reflect abilities and skills developed through informal (and formal) learning and training, and act as a validated indicator. These badges, representing achievements, can then be displayed on social media websites, profiles and other places associated with a person’s professional development and reputation.

In the launch video we hear from the primary partnership sponsors. Additionally, there are presentations from NASA and US Department of Education representatives.

I must admit that as I was watching the video there was a feeling of it being a US national initiative rather than something global and international. However, this particular point was raised, and it was stipulated that the competition and the Open Badge infrastructure were an international affair; indeed, there was activity taking place in Japan as well.

Mozilla is developing the ‘Open Badge’ infrastructure, which defines a set of standards and the technical building blocks to enable others to create and use badges for their websites. The hope is that such a system will help learners, educators, and employers all reach their goals.
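As a concrete, purely illustrative sketch of what those technical building blocks might involve: a badge ultimately boils down to a small piece of structured data that a website can display and a third party can check. The field names and structure below are my own assumptions for the sake of explanation, not the official Open Badge specification.

```python
# Illustrative only: field names are assumptions, not the official
# Open Badge specification.
from datetime import date

badge_assertion = {
    "recipient": "learner@example.org",               # who earned the badge
    "badge": {
        "name": "ioe12 Sharing Community Badge",      # what was earned
        "criteria": "https://example.org/criteria",   # hypothetical URL describing the criteria
        "issuer": "Openness in Education (peer-issued)",
    },
    "issued_on": date(2012, 1, 27).isoformat(),
}

def looks_complete(assertion: dict) -> bool:
    """Check that the minimum fields a badge displayer would need are present."""
    has_top_level = {"recipient", "badge", "issued_on"} <= assertion.keys()
    has_badge_info = {"name", "criteria", "issuer"} <= assertion.get("badge", {}).keys()
    return has_top_level and has_badge_info

print(looks_complete(badge_assertion))  # True
```

The point is simply that, because a badge is data rather than a paper certificate, any site that understands the format can display it, and anyone can follow the criteria link to see what the badge actually attests to.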

Now this raises an interesting point for me about the relationship between employers and traditional education institutions. Previously, I was all for the idea of students creating ePortfolios to demonstrate their abilities to employers. I even did some investigatory work about which environment would be most convenient for students and prospective employers. However, I attended a presentation where an employer said that they had little interest in (or time to) look at such material. They were reliant on the institutions being the gatekeepers of accreditation and validation. However, there are some areas where professional standards are seen as a much better indicator of ability; Cisco and Microsoft qualifications are primary examples. With the concept of separating the validation and accreditation of skills from the course (and potentially from the institutions), in a form that is easy for employers to access and understand, this could well be a significant (some might say paradigm) shift in educational provision.

Mozilla is also interested in developing people’s skills in web development. As a comment, this is a great initiative on the part of Mozilla, who want to maintain the ethos of the Web being ‘open’. If people train to develop skills in web development in an open way, assisted by Mozilla through the School of Webcraft, then they, as the next generation of developers, are much more likely to want to uphold and defend that ethos, thus developing a self-perpetuating, self-sustaining community.

So now we are seeing the potential for Open Online courses to provide the recognition of achievement some participants might require. Examples of such take-up by Open Online course providers are the Peer 2 Peer University (P2PU), with which Mozilla is working to deliver the School of Webcraft, and this very course, Openness in Education.

There is the potential that other MOOCs might use the Open Badge infrastructure approach in the same way.


OpenCourseWare ioe12

OpenCourseWare (OCW) is the open provision of course materials on the Web, pioneered by MIT.

I recall the time of the MIT announcement: I was working in the Computing Services department of a UK university, and the Deputy Director at the time said that MIT was putting all its courses online. I tried to make the distinction that it wasn’t their courses but their courseware that was being made public, and that there is much more to a course than its content. Fundamentally, education is more than just content; it is the added value above and beyond the content: the interaction of students with faculty, with other students, with experts, with novices, anything that creates an intellectually challenging environment to challenge pre-existing beliefs. In the Openness in Education OpenCourseWare topic video, the announcement press conference (I’ve linked to the MIT-hosted version) filmed at MIT (4 April 2001), MIT President Charles Vest makes this point quite distinctly in his opening speech, and again in response to questioning. Importantly for me and the work I’m currently involved in, Prof. Vest points strongly to the “deeply ingrained sense of service” and “incredible idealism” within the MIT faculty. For me this encapsulated the ethos of a deep sense of commitment to what education means among illustrious and highly motivated educators at one of the world’s great educational institutions. Prof. Steve Lerman (Chairman of the Faculty) says that selling courses for profit is not why most of the faculty do what they do, and it’s not the mission of the University. A fundamental value is how you create and disseminate human knowledge. Also, the fact that such an idea, and indeed a venture, could come seemingly from the grassroots faculty is extremely encouraging for me personally.

Prof. Hal Abelson (EECS) points out that going through the process of creating OCW actually allows faculty to reflect upon their own teaching practice; what they are doing with their own students. Once the content has been ‘separated’ from the education process you are able to think more deeply about the overall educational experience.

Prof. Vest goes on to say that openness is a successful way for bright people to innovate: as was the case with software, so it may be for education. This would seem to draw in other topics from the Openness in Education course, particularly the Open Source topic.

From the video, intellectual property rights weren’t as large an issue for the faculty at MIT as had been anticipated. Instead there was more of a concern about the quality of the product and the service to the end user.

The MIT initiative celebrated its ten year anniversary in April last year. In those intervening years, MIT through ‘OCW has shared materials from more than 2000 courses with an estimated 100 million individuals worldwide.’ (http://ocw.mit.edu/about/next-decade/ accessed 27 January 2012). Well over a million visits are logged each month on MIT OCW, accessed from 200 countries.

I guess paralleling the MIT OCW, the Open High School of Utah is committed to making available its entire curriculum as Open Courseware, thus providing a freely available high school level education.

The OpenCourseWare Consortium

The OpenCourseWare Consortium is a worldwide community of hundreds of higher education institutions and associated organizations committed to advancing OpenCourseWare and its impact on global education. They serve as a resource for starting and sustaining OCW projects, as a coordinating body for the movement on a global scale, and as a forum for exchange of ideas and future planning. (http://ocwconsortium.org/en/aboutus/abouttheocwc accessed 27 January 2012).

Individuals, whether they represent Consortium members or not, are welcome to use and modify materials and resources found on this website, and to participate in discussions, webinars, communities of interest, and other Consortium activities. (http://ocwconsortium.org/en/members/howtojoin accessed 27 January 2012).

There is a useful search facility on the site to allow access to courseware from member institutions, with course descriptions and overviews, and links to access and download the full courseware or individual sections. You can also access courses via the categorizations or the catalog.

The Toolkit section of the Consortium’s website has a collection of resources (or a ‘shed full of toolkits’) to help with development of an OCW project. This will prove very useful for me personally in the immediate future.

There is a master list of Consortium members, or you can use the map or list of countries/regions to narrow down your search to a geographical area.

In the UK there are six OpenCourseWare Consortium members:

Institutions of Higher Education   

Organizational Members

This compares with 51 from the USA, four from Canada, one from Australia, 39 from Spain, and 25 from Japan.

Open Content ioe12

Again, these are my notes from the course topic video http://vimeo.com/1796014

All this content is attributed to David Wiley.

David starts with the 10 year anniversary of Open Content.

It all started with Free Software (which was covered in the previous topic): Richard Stallman and the GPL, which allowed free (as in liberty, freedom) reuse of software. Freedom was very important to Richard. In winter 1998 Eric Raymond became involved. He said that ‘free’ was confusing to business, and so developed the concept of Open Source, which focused on why openness and peer review were good.

At the same time, David was working and thinking that digital content was really magic because it’s non-rivalrous: it can be used simultaneously by multiple people without detriment to any of them. This, he thought, had implications for education. Library books are rivalrous (only one person can use a particular copy at one time); electronic versions of text are not. Digital content could drive down costs and improve access to education. So David went on to work on this concept of making educational content in a way that it could be shared with, and adapted by, others who needed to use and change it for their own requirements. That’s when David made the connection between Open Source and doing the same for content: there should be a comparable licence for materials, doing for content what the GPL does for software.

David emailed Richard Stallman and Eric Raymond, and they asked questions of him about what it would be called (‘free’ or ‘open’) and what it would cover (education, culture, content, stuff?). So in June 1998 David decided on Open Content. It would cover a whole bunch of stuff. The preliminary licence was called the OpenContent Principles / License (OP/L). There was some success in the uptake of the licence, but very little uptake in education. This required talking to publishers. David was talking to Eric, who was talking to the publisher Tim O’Reilly, and the question was asked, “Will you publish something that is openly licensed?”, which led to a discussion about what publishers might want. Publishers would have to have protection from undercutting, due to the costs and work involved. What did authors want? ‘Open Content isn’t really like Open Software.’ Some authors wanted recognition and some wanted to protect the integrity of their work; they were willing to share as long as no changes were made to it.

So in summer 1999 the Open Publication License was published, allowing download, sharing and redistribution. It required attribution to be given to the original author. It came with two options:

Option A) To prevent distribution of substantively modified versions without the explicit permission of the author(s)

[Effectively a derivative works clause]

Option B) To prohibit any publication of this work or derivative works in whole or in part in standard (book) form for commercial purposes unless prior permission is obtained from the copyright holder.

[Effectively the undercutting (no commercial) clause]

This licence saw much more uptake.

There were a number of problems. Both licences were abbreviated to OPL, and both were referred to as ‘that open content licence’, so again there was confusion. Also, the naming of Options A) & B) didn’t tell you anything about the content of the option. Additionally, a link at the bottom of a page of content that took you off to the licence page didn’t tell you whether either of the Options had been applied to the work or not.

This was a “good idea, but poorly executed”.

Then along came Larry Lessig (and the group that he worked with), and in December 2002 Creative Commons License 1.0 was born. The options were specifically named, e.g. non-commercial, no derivatives, etc., and there wasn’t just one licence: each combination of options created a separate licence, with a descriptive name, e.g. CC BY, CC BY-NC-ND.

There was still a button problem, because the button didn’t make clear which licence it was. This was fixed later.

In the CC 2.0 version, attribution (By) became mandatory.

At the end of the video, David asks “So where are we now, 10 years on?”, and goes on to give a run down of examples from major sites and online services where there are hundreds of thousands of individual content elements made available under Open Content Licences.

In education, UNESCO convened a meeting and discussed Open Educational Resources. There’s the OpenCourseWare Consortium. There are hundreds of university-level textbooks openly available. And there’s the Cape Town Open Education Declaration.

And looking forward, “Where are we going?”

There are still problems. Licence compatibility: which material from one licence can be mixed with material from which other licences? Without the Public Domain there is only 28% compatibility between CC licences. (Refer to the card game from the Open Licensing course topic.) David states that whichever copyleft licence you pick, you can’t mix it with the majority of other available copyleft licences.
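As a rough illustration of the mixing problem, here is a simplified compatibility lookup based on my reading of the Creative Commons remix chart (the table, the licence abbreviations and the function are my own sketch, simplified, and certainly not legal guidance):

```python
# Simplified sketch of CC remix compatibility, based on my reading of the
# Creative Commons remix chart; an illustration, not a reference.

COMPATIBLE = {
    "BY":       {"BY", "BY-SA", "BY-NC", "BY-NC-SA"},
    "BY-SA":    {"BY", "BY-SA"},
    "BY-NC":    {"BY", "BY-NC", "BY-NC-SA"},
    "BY-NC-SA": {"BY", "BY-NC", "BY-NC-SA"},
    "BY-ND":    set(),        # NoDerivatives: cannot be remixed at all
    "BY-NC-ND": set(),
}

def can_remix(a: str, b: str) -> bool:
    """Can works under CC licences a and b be combined into a new work?"""
    return b in COMPATIBLE.get(a, set()) and a in COMPATIBLE.get(b, set())

print(can_remix("BY", "BY-NC"))         # True  (the combined work must stay NC)
print(can_remix("BY-SA", "BY-NC-SA"))   # False (the two ShareAlike clauses clash)
```

The BY-SA / BY-NC-SA case is the one David highlights: each ShareAlike clause requires the combined work to carry its own licence, so the two requirements can never both be satisfied.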

Also, there is some confusion/concern over the NonCommercial clause. At the time, 76% of Flickr content licensed under CC contained a NonCommercial (NC) clause.

CC+ and CC0 will become more important.

David then goes on to outline a couple of areas of personal involvement for him.

  • Flat World Knowledge textbooks, which is a new publishing model.
  • Open High School of Utah, which is a new free online schooling model. Interestingly the model allows for an iterative cyclic correction of the curriculum.

Drawing on the other course topic reading(s):

“Open content” … is content that is licensed in a manner that provides users with the right to make more kinds of uses than those normally permitted under the law – at no cost to the user.

The primary permissions or usage rights open content is concerned with are expressed in the “4Rs Framework:”

  1. Reuse – the right to reuse the content in its unaltered / verbatim form (e.g., make a backup copy of the content)
  2. Revise – the right to adapt, adjust, modify, or alter the content itself (e.g., translate the content into another language)
  3. Remix – the right to combine the original or revised content with other content to create something new (e.g., incorporate the content into a mashup)
  4. Redistribute – the right to share copies of the original content, your revisions, or your remixes with others (e.g., give a copy of the content to a friend)

Openness in Education ioe12 Sharing Community Badge

Part of the criteria for the OpenEd Assessment Designer Apprentice Level Badge is to design a badge and for other participants of the course to work towards that badge. I felt that the earlier in the course I created the criteria for my badge, the easier it would be for others to meet them as they worked through the course.

I will be providing my own Bookmarks as an example in the near future, but hopefully the criteria below are clear. I would welcome your comments.

Here is the description:
Badge Type: ioe12 Sharing Community Badge
Assessment Type: Peer
Badge Issuer: Peer
Badge Level: Novice Level

Description:

Either

  • Share 25 relevant Bookmarks

Or

  • Share 25 relevant Links as a blog post

Criteria for the 25 Bookmarks/Links:

  • One of these Bookmarks/Links must relate to each of the course topics (12 in total) – [Amendment: (thanks to mathplourde) use appropriate tags or description to assign to specific course topic]
  • One Bookmark/Link must be to a blog post of another course participant which they have posted as part of the course
  • Two Bookmarks/Links must be to relevant videos
  • One Bookmark/Link must be to a relevant peer reviewed article

Justification:
To follow an Open Practice ethos is to make your work available to the wider community. One element of this is to share your materials with the community, so that others may easily identify useful and relevant materials. In this collective way, a social filtering of materials can occur.

Open Source ioe12 Part 2

Notes taken from Cory Doctorow’s ‘The Coming War on General Computation’ video, for the ioe12 course.

Something more important – General Purpose Computers

DRM 0.96

  • Physical defects to the discs
  • Or other physical things that the software could check for
    • Dongles
    • Hidden Sectors
    • Large Manuals
    • etc.
  • These failed because
    • commercially unpopular
      • reduce usefulness of software to legitimate buyers
        • couldn’t back up software
        • lost ports to dongles
        • forced to transport large manuals
    • they didn’t stop pirates
      • trivial to by-pass authentication
      • ‘experts’ would reverse engineer & crack the software and this version would become widespread

[Video time 7m25sec]

By 1996 there was a ‘solution’

  • WIPO Copyright Treaty passed by the UN World Intellectual Property Organization
    • Laws to prevent the use of cracking programmes, and the extraction and storage of any information so retrieved
      • No lawyers required to enforce

but this made unrealistic demands on reality, for example you couldn’t look inside your computer while it was running programmes.

[Video time 9m20sec ish]

Cory says that 2011 is the hardest time it will ever be to copy things.

[Video time 13m20sec]

Special purpose technologies are complex & you can remove features from them without doing fundamental disfiguring violence to their underlying utility.

Generally this works

But null & void for general purpose computer & general purpose network, the PC & the internet.

There is a superficial resemblance to achieving regulatory goals.

  • e.g. remove BitTorrent from the internet because it enables copyright infringement
  • all it takes to make legitimate material disappear from the internet is to say that it infringes copyright
    • fails to attain the actual regulatory goal – it doesn’t stop people from violating copyright

But it does satisfy the:
“Something must be done, I am doing something, something has been done.”

Thus any failures that occur can be blamed on the regulations not going far enough, rather than on the idea that the approach was flawed from the outset.

Now we get specialised computers that run specific programmes to e.g. stream audio, play games, etc. but can’t run other programmes that might undermine company profits.

This is the ‘Computers as Appliances’ approach

An appliance isn’t a stripped down computer, it is a fully functioning computer with ‘spyware’ out of the box to prevent ‘misuse’.

DRM always converges on Malware. Companies & governments can run software as surveillance to prevent activity, e.g. ‘brick’ a product that has been tampered with.

On the network side, attempts to make a network that can’t be used for copyright infringement always converges with the surveillance & control measures used by oppressive/repressive governments. Refer to SOPA.

Cory sees this as a century long conflict, and copyright is just the first part of this.

“Can’t you just make a general purpose computer that runs all the programmes except the ones that scare and anger us?”

“Can’t you just make an internet that transmits any message over any protocol between any two points, unless it upsets us?”

[Video time 22m]

“Copyright isn’t important to pretty much everyone.”
Copyright is trivial.

Freedom in the future will require us to monitor our devices and set meaningful policy on them; to examine and terminate the processes that run on them, to maintain them as honest servants to our will and not as traitors and spies working for criminals, thugs and control freaks.

We have to win the copyright battle to allow us to move forward. There are organisations that help with this, supporting open and free systems.

Open Source ioe12 Part 1

I watched the Revolution OS documentary and found it very interesting. Obviously, it had a particular perspective on events and this has to be taken into account when viewing, but there was a lot of material that I personally found useful and would take away with me after watching. I did get a much better idea of events and the main players involved in the Free Software and Open Source movements. It was also interesting to see the different take on things that people from the two camps have (or had), and a certain level of potentially underlying animosity.

I now have more of an understanding of ‘Copyleft’ as a term used for the distribution of software that allows copying and redistribution under a specific licence. [And a point made in the video answered a quandary I had personally about whether to distribute my work under a CC licence other than CC0 or Public Domain.] If something is placed in the Public Domain then anyone can make a small change and copyright the result; precisely what the Free Software Foundation didn’t want to happen.

The points that I did find very useful from this video were:

  • The significance of Richard Stallman developing the GNU General Public Licence (GPL)
  • The discussion that brought about the term Open Source
  • The authoring by Bruce Perens of the Open Source Definition

Bruce Perens derived a free software definition for Debian (a Linux distribution). He then relabelled this to become the Open Source Definition. In the video he explains the nine rights in the Definition as:

  1. Free (as in Liberty) Redistribution
  2. Source Code Available
  3. Derived Works Permitted (for redistribution)
  4. Integrity of the Author’s Source Code – Author can sort of maintain their honour – if you make a change you might have to change the name of the programme or mark out the change very clearly
  5. No Discrimination Against Persons or Groups – can’t prevent someone or a group that has ideologically differing opinions to your own from using the software
  6. No Discrimination Against Fields of Endeavor – usable in a business as well as in a school
  7. Distribution of License – give license to someone else who gives it to someone else
  8. License Must Not be Specific to a Product – if distributed on a Red Hat system then the license can’t say ‘don’t distribute on a SUSE or Debian system’
  9. License Must Not Contaminate Other Software – distribute on CD with other software and you can’t stipulate that ‘other software must be free or you can’t distribute my software in there’

Refer to this section of the video.

The full Definition is provided in one of the other readings from the course.

The GPL did allow business and profit to be made by providing a service or support for ‘Free’ or ‘Open Source’ software. With proprietary software the support is a monopoly, which arguably can lead to a poorer service. Cygnus, under Michael Tiemann, became the first company to support free software.

Linux took off at the same time as the Web because of Apache, the killer Linux app. It was more reliable and more flexible than alternative products, and, usefully for Internet Service Providers (ISPs), it allowed multiple sites to be run from one Apache installation.

As a bit of an aside:

I personally remember starting to use Netscape during the infancy of the Web on a Unix box in about ‘93–’94. I also remember the problems Netscape began to have with market share as Internet Explorer began being bundled free with the Windows operating system. So it was interesting to see what the influences were on the Netscape executives, including Eric Raymond’s ‘The Cathedral and the Bazaar’, which prompted them to go down the Open Source project route.

I also remember the Sun SPARCstations that were bought in Electronic and Electrical Engineering when I was researching back in about 1992, and how expensive they were compared to the PC 486s of the time, so what Larry Augustin of VA Linux had to say on that matter certainly had resonance.

I mention ‘The Cathedral and the Bazaar’, written by Eric Raymond. As it too makes up part of the course readings, I have subsequently read it, and it was an interesting read. It deals with Eric’s development of an open source email programme called ‘Fetchmail’, and he uses this experience to explain the parallels with Linux open source development. One point in this explanation I found particularly interesting was that it seems important to know when to use the ideas and work of others. Also, be extremely reverential in your praise of these other parties and people will believe you actually did much more of the work yourself, if not all the work. You also have to have a kind of charisma that will encourage other people to follow your lead.

The basic breakdown of the Cathedral and Bazaar concept is as follows. The general approach to software production is the use of a set number of programmers who develop the code and debug using a ‘closed (source)’ approach before releasing a version. The debugging takes a long time and bugs are seen as deep level problems. However, because of this process, the software is relatively bug free on release. This is the ‘Cathedral’ approach as Raymond terms it. The opposite of this is the ‘Bazaar’ approach. Here, the source code is made public and anyone can contribute to the development. This enables a very intense peer-review process to take place. The iteration process is very rapid, in the case of Linux multiple version point updates were made in a day. By using this process debugging solutions are announced quickly, thus alleviating duplication of tasks and enabling those involved in debugging to rapidly stop having to work on that task once a solution is found.

It was previously hypothesized by Brooks’ Law that the more programmers that are thrown at a late software project, the later the project becomes. The ‘Law’ suggests that the number of bugs at the interfaces between code developed by different programmers increases as the square of the number of programmers. If this were the case, however, Linux would never have been produced.
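The quadratic claim comes from counting the pairwise communication (and interface) paths among n programmers, n(n−1)/2. The snippet below is my own quick illustration of how fast that grows; it isn’t taken from the reading.

```python
# My own illustration of the Brooks' Law argument: each pair of programmers
# is a communication path, and a potential source of interface bugs.

def communication_paths(n: int) -> int:
    """Number of distinct pairs among n programmers: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:>3} programmers -> {communication_paths(n):>5} pairwise paths")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225: roughly quadratic growth
```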

Open Licensing #ioe12 – Post2

I have created notes from the Larry Lessig video for this section of the course, and I’ve written one reflective piece in response. However, I’ve looked back over the course requirements for Badges and am now wondering whether my meandering approach would meet the criteria, even though my own learning is benefiting. I think I’ll go through the content for each section and write a brief blog post from that; I can then look at things in more detail afterwards.

I’d heard previously of the ‘Remix Card Game’; I think it had been used at a conference and I read about it from there. I hadn’t really tried it out myself until I clicked on the link in the Open Licensing course materials, and I’m impressed with how good it actually is. I’ve created a game (not online but for in-class use) in the past to inform people about tagging, so I know how useful this game-based approach can be. I’m going to find the Remix Card Game very useful when explaining Creative Commons licence use with mixed media.

From its inception, the period of copyright has been for a limited time span. In this way, the creator or author of the works was able to capitalise on her/his intellectual property for a limited time with state protection. Initially, this protected period was quite short. The works would then move into the Public Domain for the public good. In this way, the works can be built upon by others for the furtherance of knowledge. This is, for example, a fundamental concept for the advancement of scientific discovery. Isaac Newton said “If I have seen farther it is by standing on the shoulders of giants.”

Progressively, this period of copyright has been extended. In the US, Congress has periodically extended the length (outlined in this Larry Lessig interview); it now lasts for 70 years after the creator’s death. (In the UK a recent ruling has increased the period of copyright on music recordings from 50 to 70 years after the date of creation.) In effect, Congress is granting a perpetual copyright, which some have challenged as being unconstitutional, but the courts have said that each of these changes is only for a finite period and is therefore constitutional. Others have argued that the falling of works into the Public Domain following the copyright period amounts to confiscation, and that copyright should be perpetual and infinite, so that the creator can receive revenue. However, Larry Lessig dispels such arguments in his wiki on the subject.

The concept of the Public Domain isn’t as straightforward as one might hope, because there is much work whose status isn’t determined. Copyright holders can’t be traced, or it is unclear whether the work is actually in the Public Domain. These are termed ‘Orphan Works’. And without a lot of effort to resolve it, this unsatisfactory situation looks destined to continue. So, rather than perpetual Copyright, we have perpetual Uncertainty.

The uncertainty related to the use of works by others is encapsulated very well in the ‘Bound by Law’ comic book that explains the dilemmas faced by documentary filmmakers, where the potential costs of using the works can be crippling, and prevents a fuller explanation or reflection of cultural values from being created.

So the main crux of the argument hinges on the period of protection that Copyright should offer, and on what constitutes the Public Good. I have my own opinions on this, and that is why I’ve gone down the Creative Commons licensing road for my own works. I feel that the licences offer enough protection for the works, and allow re-use and development to take place in a way that will allow greater and faster development of human knowledge.

The papers by Rufus Pollock make interesting reading, and resonate with my own thinking.

As Pollock explains, once knowledge is created, sharing it is non-rivalrous: it is not diminished if multiple people use it at the same time. For the benefit of society or humankind, once a piece of knowledge exists the greatest value is derived from distributing it at cost (which could be zero or very close to it). However, the initial production can be very costly, and this has to be paid for in some way.

Pollock suggests that there are four (non-exclusive) options for creating this ‘first copy’.

1. Up-front funding either by the state or by other entities – such as charities – followed by free (or marginal cost) distribution, e.g. BBC funding model.

2. Donations (spare time) or self-financing with free distribution e.g. Wikipedia, blogs and many open source projects.

3. The grant of monopoly rights in relation to the copying or use of the knowledge in the form of intellectual property such as copyright and patents.

4. Using imperfections of the market to obtain profit from being the creator of knowledge but without using monopoly rights. Such methods include secrecy, first-mover advantages, marketing and the sale of complementary goods that are rivals but for which an advantage is conferred by the production of the original knowledge.

Pollock goes on to put forward an interesting argument (developed from examining peer-to-peer illegal activity) about the added value derived from making works available via Public Domain and compensating artists for loss of revenue in other ways. Several countries are already considering or using levies elsewhere in the chain to achieve this; taxing broadband provision, or blank recording media. Additionally, the majority of ‘historical’ recording under copyright aren’t currently commercially available. This adds further to the Public Domain argument for increased value and greater creative potential from reuse.

In the second of the Pollock papers from the course reading, an estimate of optimal copyright duration is derived by developing an equation and using empirical data; the value comes out at around 15 years. This is much shorter than most countries set copyright to be. The argument therefore follows that policymakers could enhance social benefit by setting copyright to this much reduced value.