OA: beyond technocracy?
To build on the thread started by Joanna Zylinska, I see two areas where it might be fruitful to ask whether we are jumping too soon to technocratic answers:
QUALITY AND OPEN ACCESS BOOKS
The question about the quality of open access resources, both books and journals, has prompted some good reflection on evaluating scholarly work, as several commenters on this list have shown in interesting posts. My question is: in the rush to establish the quality of open access, do we risk imposing technocratic solutions that favor measurement over actual quality? I would argue that we do more than enough of this already in scholarship, as we have seen with the power of the impact factor and the Research Excellence Framework approach, which I see as problematic (others may disagree, and I would certainly acknowledge that there are both benefits and downsides to this kind of approach).
For example, imposing a peer review rather than editorial approach may or may not be beneficial for scholarly monographs in areas where this does not exist. In some cases, the invitation to write a book or book chapter comes from a leading scholar in the area in question, who already knows that the author has authoritative knowledge in a particular area; hence the invitation to write. In this case, it may be that what is needed for a high quality book is editorial support, not peer review at all. In other cases, for a monograph-length work, unless a discipline has a well-established tradition of thorough peer review of such works, it is a little hard to imagine scholars with enough time on their hands to do a thorough job of peer review.
In other words, I don’t think we have enough evidence to employ this kind of “evidence” as an indicator of quality for scholarly work or publisher. Rushing to impose particular procedures in order to speed up assessing quality of open access books may actually detract from quality. By “rushing”, I mean processes that take something less than decades of research, including qualitative research, and deep thought by many scholars.
CREATIVE COMMONS LICENSES
First, let me say that I am a huge fan of CC, use the licenses regularly, contribute to campaigns and encourage others to use the licenses. However, in the long term it is my hope that CC will help us to develop new norms of sharing that will make the licenses per se, if not irrelevant, at least less important. For example, one reason CC is important now is because we have automatic copyright; perhaps someday this will change. In the meantime – I have thought about this a fair bit – specific CC licenses do not necessarily do or say quite what people think they want to say by using the licenses. Some examples relevant to scholarship:
CC-BY and text-mining
1. Note that a creator is perfectly free to create a locked-down work that is not technically suited to text mining at all (such as a PDF), and to use a CC-BY license on it. If we want people to quit relying on PDFs, it would be better to say, “let’s quit using PDFs, and here is why”, rather than “use CC-BY” (which will simply result in many people putting CC-BY on their locked-down PDFs).
2. The “BY” in CC-BY means attribution. Large-scale text mining of many documents, data, etc., to create new works makes meaningful attribution extremely difficult if not impossible. So technically if what we really want is text mining, it is the “BY” that should be discouraged.
3. I question whether CC-BY is needed for text mining at all. Google and other search engines work by crawling websites (text mining) and creating derivatives to deliver results to users. If CC-BY were necessary for text mining, they would have to stop doing this. I would argue that anything freely available on the web, with nothing in its metadata or robots rules saying that crawlers are not allowed, is available for text mining.
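As an aside, the “no robots allowed” convention mentioned above is machine-checkable. Here is a minimal sketch (my illustration, not part of the discussion; the site URL and crawler name are hypothetical) of how a well-behaved text-mining crawler could consult a robots policy before fetching pages, using only Python’s standard library:

```python
# Sketch: checking robots exclusion rules before text mining.
# Normally you would fetch http://example.org/robots.txt with rp.read();
# here we parse a sample policy directly to keep the sketch self-contained.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A well-behaved crawler only mines pages the policy permits:
print(rp.can_fetch("MyMiner/1.0", "http://example.org/articles/oa-book.html"))  # True
print(rp.can_fetch("MyMiner/1.0", "http://example.org/private/draft.html"))     # False
```

The point being that opting out of crawling is already expressed at the site level, independently of whatever license sits on the documents themselves.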
Derivatives and text-mining
Similar to CC-BY, some think that allowing derivatives is necessary for text mining. I argue that this is not necessary at all, for the reasons noted above. Something else to think about is that authors might very well be completely happy with text mining, but not want different types of derivatives of their work (such as creating a new article that changes the wording around a bit, hence changing the meaning).
My two bits for today,
What aspects should funders take into account when developing funding schemes for OA books?
It seems to me that Frantsvåg is suggesting that what we currently call a
book will be quite different from tomorrow’s books. Much of what is
currently in books is also on the web, so there is a lot of waste in the
industry, the bookshops and the libraries. So, should we worry more about
online collaborative production of new things (flexible, hyperlinked,
customisable, multimedia, updateable, but unprintable in its native
form) than about old-fashioned e-books?
There are a lot of questions here, and prophesying about the future is easy; getting it right may be much more difficult.
Yes, I believe that in the future we will have a number of different products where today we have the book, e- or p-. But I think we need to move stepwise, and that we should now concentrate on creating viable e-monographs that are OA. They will probably look very much like traditional books, at least for the near future. But if they are created as e-books, not as e-versions of p-books, this could liberate the form from what is possible on paper and allow the creative ones to develop new forms. At this stage, I would like to see these as positive side effects, not as something we should target.
If we try to make too big a change all at once, nothing will happen. With small changes we are on our way to something – nobody knows what, but if the change is for the better, we had better make it.
So, for now: OA e-monographs will be what I’m looking for.
I am not sure Rafael is prophesying the future; it seems more like he
is commenting on the present. “The future is now”, as has been quoted
often enough with this kind of thing – so much so that it is hardly worth
noting the source.
Publishing has always been collaborative, but this has just been hidden
from view. Single authorship, not to be confused with the monograph as a
single-subject form, is a myth. Unless we want to discuss very woolly
boundaries between single authorship and collaboration, we might as well
just save our time and admit the collaborative nature of book production.
Putting it online just makes it easier for the collaboration to occur.
Nothing is lost and we are not turning ourselves into prophets by doing
so. We are also not ‘anarchising’ the book production process by doing
so, or projecting it immediately into an unknown since we can control
the level of collaboration (from strong to weak) using handy tools
(another discussion) and deliver content that looks *just* like the
e-monograph or even the paper monograph. In fact, they are monographs,
just made online instead of in MS Word.
From there things will just evolve. It’s not anything radical. However,
ignoring pre-now methods is quite a radical position.
DOAB Digest, Vol 1, Issue 14 – Funding OA monographs
Thank you all very much for the interesting discussions. I have been following the debates on quality assurance and licensing with a great deal of interest, as we are currently discussing how to deal with both in a call for funding of OA monographs. We also have a tendency to set a re-use license as the standard for funded pilot projects in this domain, but I have now begun to wonder whether this might in fact be detrimental to our aim, namely to increase acceptance of open access among high-profile researchers in the HSS. In principle, however, it seems to me that funders do have a responsibility for developing the OA infrastructure in a way that allows for text mining and other methods of the digital humanities, and it might thus be best to require and establish such standards, provided that the researchers who publish understand their legal situation.
In fact, considering the JISC report on text and data mining that was mentioned by Gary, there are barriers to the full exploitation of texts even if the licenses allow for re-use (but require attribution). I do not think that we will be able to convince HSS scholars to relinquish their rights of attribution, and I wonder if the research assessment and evaluation systems will ever evolve in such a way that personal merit will count less than the “culture of sharing” (as the recent communication and recommendations on open access by the EC call it), but for now I would be inclined to at least require re-use licenses as a common standard for funded projects.
The question of quality assurance is also tricky, if, as Eelco pointed out, traditional (and renowned) scientific publishers do not necessarily adhere to the highest and most transparent standards of peer review. It would thus not make too much sense to duplicate these processes in an OA monograph world. On the other hand, we have to make sure that the standards are acceptable and known to scholars, or else that they have a chance of being accepted, and it is not yet clear to me what we as funders should request other than that the standards of review at least follow currently established standards for individual disciplines. It would be quite good to have a “seal of approval” for publishers that also renowned publishing houses could apply for and that would serve as a mark of quality for funders, authors, readers and so on.
Dr. Angela Holzer
Deutsche Forschungsgemeinschaft (DFG)
German Research Foundation
-Scientific Library Services and Information Systems-
Angela, you wrote:
> I do not think that we will be able to convince
> HSS scholars to relinquish their rights of
> attribution . . .
> . . . On the other hand, we have to make sure
> that the standards are acceptable and known to
> scholars, or else that they have a chance of
> being accepted . . .
I’m fairly sure that many HSS scholars don’t know
what is in their best interests or those of their
disciplines, and I’m not convinced that the only
reasonable actions are those that the scholars
support. In various places I’ve worked I’ve met
HSS scholars who were entirely opposed to all
digitization and who felt that the work they
produced on state-funded research leave belonged
to themselves and not to the state that funded it.
These people thought they had a perfect right to sell
to a publisher their state-funded writing and
they resisted OA as an interference in that right.
All of which is to say that I don’t think imposing
upon state-funded writers – using some of the stick
rather than all carrot – would be a gross violation
of anyone’s rights.
In response to Angela, I think that as a strong funder of research setting up a funding program for OA monographs, DFG should not be afraid to lead the way towards high quality OA publishing.
As Suber points out in his June newsletter, when large funders adopt strong OA policies, publishers cannot afford to refuse work by the grantees. This would indeed mean requiring re-use licenses (in the case of books CC-BY-NC should be acceptable). I would also argue that these OA books should be deposited in a central repository in an appropriate format (why not both HTML and PDF?).
Regarding quality control, DFG might consider a nuanced approach. I think Heather made an important point that we shouldn’t rush into ‘technocratic solutions that favor measurement over actual quality’. But this shouldn’t keep DFG from trying to ensure quality in the works it funds. Malcolm made a strong case for transparency as the best option. DFG could introduce transparency by asking publishers to provide a description of their quality control system. DFG should be able to establish whether this system is adequate, if needed with the help of independent scholars in the relevant area. I think an effort of this kind is much needed in many countries in continental Europe, and there are examples of different approaches in Sweden and Austria. My impression is that in Germany there are quite a few presses that are inclined to publish OA books, but they are often relatively young and still thinking about trustworthy mechanisms for quality control. DFG’s call for funding could be just what the doctor ordered.
I’ve been following this conversation with great interest. Whenever I’ve had a moment to sit down and even begin to think about writing, someone else has just posted something really important, and done so much more eloquently than I could.
However, as the days of this DOAB discussion are coming to an end I thought I’d write – as an OA publisher and as a builder of library consortia (not sure if that’s a job description, but you’ll see later why I use it).
The issue I want to address is how prescriptive should we be in our quest for the perfect world of open access everything, all with suitable and easy to understand licenses in a much more friendly copyright environment with perfect and transparent quality assurance, publishing services from professional, preferably non-profit (or modest profit) publishers, adequate funding for publishing worldwide and accommodation of all new multimedia types and digital formats. We’re on a journey where we do need to have a sense of direction, but as Heraclitus said, you can’t step in the same river twice. The digital river we are traveling in is flowing very rapidly. There are many rocks along the way, and even the shapes and forms of these rocks are changing as the water and various objects in the stream hit against them. Some pretty nifty manoeuvrings are called for.
So, let me give a few examples of my experiences in white water rafting. In 2008 Bloomsbury Publishing Plc invited me to set up their academic imprint, Bloomsbury Academic. We began by publishing monographs in 2010 on CC NC licences, allowing authors to further restrict the licence if they wished. We put an HTML version on the BA platform, but some authors wanted the PDF posted on the site too. My masters at the time were unlikely to agree to posting the PDF, for fear that it would cannibalise sales of print and ebooks. I’m convinced that had I dug my heels in and insisted on the PDF on the platform, the appetite for open access would have waned considerably. In any case, I had to contend with a mix of attitudes throughout the company, from hugely supportive to indifferent to downright hostile. I will always be grateful to Bloomsbury for allowing this experiment to take place, even if for some it was not under ideal conditions. The print and ebooks sell as well as (and sometimes better than) closed books, and while HTML does not suit everyone, it is at least free to the end user. Lesson learnt – don’t be too rigid about licensing and format – we need more experimentation.
At BA we were amortising our origination costs across the print and ebook versions and this meant that the book prices were as high as ever. At the same time library budgets, for books especially, were shrinking. So, even if in an ‘ideal’ world all publishers adopted the BA business model we would still need to sell the same number of units (or put another way, extract the same amount of revenue from libraries) to support the business of publishing each book in the first place – even if it was available on open access.
I then thought about who actually pays for monographs. It is, of course, the libraries. So, the question arises how to make better use of the funds that already exist in libraries. The answer to me seemed to rest in consortium buying as this generally reduces costs. (I had some experience with consortia, having come up with the business model for EIFL when I worked for the Soros Foundation in the nineties.)
Applying these thoughts to monographs in today’s world led me to a business model that splits out the payment of origination costs (aka fixed costs, or ‘getting to first digital file’ costs). The challenge was how to make these a one-off payment through a library consortium, paid for from existing library funds, in a way that reduces overall costs per book per library, still keeps professional publishing input viable, and is open access.
And so I am now working on a pilot project with Knowledge Unlatched, a not-for-profit Community Interest Company (CIC) which will establish an international library consortium to pay for the origination costs of monographs in the form of a Title Fee – in return for open access. For a description of how the model works see http://www.knowledgeunlatched.org. Having spent a long time talking to stakeholders around the world, it was clear that no single model could please all the people all the time. But there are enough elements in the model to garner both the financial support and the willingness to participate in a pilot that will start in 2013. There is a much greater appetite for experimentation amongst all participants in the scholarly eco-system than there was in 2008, which saw not only the start of BA but also the EU-backed OAPEN project. Knowledge Unlatched is deliberately transparent and structured in a way that allows for the flexibility, experimentation and adaptation that is essential for anyone white water rafting in today’s digital river. Lesson learnt – hold the vision, experiment and hang on tight for the ride.
I agree with Caren and her argument for not being too prescriptive. We need to have buy-in and respect for key guidelines from all stakeholders. From Eelco’s post I would support transparency of the process of quality control with some kind of light touch set of guidelines that enables inclusion of high quality regardless of source. All of the contributions to this discussion have certainly helped me in my thinking as I work through the practicalities of moving Knowledge Unlatched forward. Thank you all. Contact me if you’d like any more information about Knowledge Unlatched (apologies for the plug) and have a great summer.
Dr Frances Pinter
21 Palmer Street
London SW1H 0AD
Quality and open access books
Eelco suggests three options for quality control in an OA environment: force strict peer reviews on all procedures; identify a number of adequate forms of quality control; aim to make peer review procedures transparent.
To me, that ranks them in ascending order of preference. The last option leaves control in the hands of the authors (do I want to be associated with a publisher who does that?) and readers (am I willing to read something published under that policy?). I have more confidence in the outcomes of that kind of disseminated decision-making than in top-down control. I don’t disagree with Gabriel when he says that many HSS scholars don’t know what is in their best interests or those of their disciplines. But we also have to ask whether there is any reason to believe that planners, strategists and technocrats know what is in the best interest of our disciplines. I can’t see much evidence that they do, and the evidence that they don’t is abundant (cf. Heather on the REF). So trusting a messy collective exploration of new possibilities to produce incremental enhancements of the collective culture looks to me a safer course. It will probably make progress frustratingly slow, but it’s less likely to screw things up badly.
Of course, that kind of disseminated process can only work properly if there are no structural obstacles or distortions. So (e.g.) if the publication system we currently have is tied to TA by established commercial interests, OA mandates may be necessary to effect change. Likewise, if the transition to OA is obstructed by (perhaps unrealistic) concerns about its vulnerability to vanity and predatory publishing, then a relatively directive approach to quality control standards may be necessary to establish confidence and get things moving. Lack of transparency is also an obvious problem: people can’t make informed decisions if the information isn’t available. So I’m not wholly resistant to directive intervention; but I think it should be minimalist.
Brands (e.g. JISC, DOAB) that make their endorsement conditional on compliance with a set of principles are obviously well positioned to specify a set of quality control principles. And it would be entirely proper for funding bodies to include a requirement to publish with an OA publisher endorsed by one of those brands (cf. Angela’s “seal of approval”). This level of directive intervention would not carry too much risk of stifling the development of more anarchic, experimental approaches outside their zone of control.
The discussion has made me reflect again on how peer review actually matters to me as a researcher. Here I want to separate two sides of that role.
As a producer of research, peer review matters when I want to get something published. I would like to be prevented from publishing something really bad (I have had that good fortune!). Also, I would like to be helped to publish something that is as good as it can be: and then it’s not quality control that’s important to me, but quality enhancement – I want the reviewers to provide feedback that will help me improve the final product. For this purpose I’d much rather have detailed feedback from a single autonomous editor who’s an expert in the field and really understands how to get the best out of authors’ efforts than perfunctory approval from a couple of referees operating under the strictest principles of double blind reviewing. The usefulness of peer reviewers’ reports varies, obviously, between individual reviewers, but also between journals: presumably some editors prefer reviewers whose reports will contribute to quality enhancement, others are only interested in whether the reviewers will make a reliable qualitative judgement, and some (perhaps) are just going through the motions. Transparency about peer review policy is to be encouraged, and aids to transparency (such as the icon system Caren mentioned) are a good idea: but a peer review *policy* won’t necessarily reveal the peer review *culture*, which is much more important to me as an author.
As a consumer of research, I’m glad in a general way that peer review exists to apply some minimal level of filtering to the production of academic or academic-seeming books. But I know the filtering isn’t particularly rigorous: even the best publishers in my field sometimes put out stuff for which I’d have recommended rejection if I’d been a referee. And I wouldn’t want the filtering to be more rigorous: I also know of work that has struggled to get into print because referees have taken fright at its originality. Because peer review is fallible, and because there is ample scope in my discipline for disagreement about what the right peer review decision would be in any particular case, I would never dream of using the general quality of a publisher’s peer review to judge the quality of a particular publication. My sense of the general quality of a publisher’s peer review does have some influence on my decisions about how to allocate effort in getting hold of published material: some publishers are more likely to reward my efforts than others. But those decisions are influenced much more heavily by my sense of a book’s relevance to my current needs – so I rely on information about its contents from the publisher’s website, reviews, etc. If you need to know about the evidence for Menander Rhetor’s commentary on Demosthenes, you’ll read Heath 2006 – but not because you trust the publisher’s peer review policy, nor even because you trust the author’s expertise in late ancient rhetorical theory: those factors may contribute to raising your spirits, but your decision will actually be driven by the fact that you need to know about the evidence for Menander Rhetor’s commentary on Demosthenes and Heath 2006 contains the only substantial treatment of the subject since 1883.
From the consumer’s point of view, then, peer review doesn’t matter much to me in practice, though I like to think that I can take it for granted that there has been some filtering and enhancement going on in the background. From the producer’s point of view, what’s most important to me in peer review does not reliably correlate with what is expressed in formal peer review policies. So fixating on those policies in a sense misses the point, and being prescriptive about them carries some risk of detracting from the pursuit of quality (again I agree with Heather).
I have really enjoyed the discussion to date, and would like to support
some of the more recent statements made by Malcolm and others.
An issue that Caren Milloy raised – which I would like to highlight with
bells and whistles, is that it is really important to both allow and
encourage new publishing enterprises to emerge. We are in a state of
transition, and I really doubt we have yet seen whatever dissemination
practices will eventually dominate – unless, that is, we allow innovation
to be stifled now.
I am really really afraid of having industry defined standards that
‘acceptable’ publishers have to meet. In almost any industry you wish to
look at these standards rapidly become controlled by established vested
interests and used to stifle innovation and entry. So I shudder at the
thought of any body – especially one made up of existing publishers –
defining an industry standard about what a publication is or should be, or
what ‘acceptable’ practices are – be they peer review, dissemination
techniques, or anything else.
But – as in almost any other industry – there is real social benefit in
having assessment agencies providing users with information about the
reliability and quality of the ‘products’, providing they are run
independently from the producers. So I would support proposals for
validation by agencies such as JISC and DOAB, provided that they are
flexible and open to including new initiatives in their assessment
process, they don’t all coordinate on precisely the same set of criteria,
and grant giving bodies resist the temptation to coordinate on the use of
just one. By reducing information asymmetries these agencies can play an
important role in developing our trust and acceptance of new methods and
practices, and allow us to move away from traditional practices more
quickly.
Peer Review and Quality:
The difficulty we face is that not all research is equally good and so we
fall into some reliance on the ‘name’ of a publisher as a signal of the
quality of the publication. This, of course, leads to a vicious cycle with
the publishers with the best reputation attracting the best submissions,
so establishing a powerful position within the industry, and provides a
huge ‘barrier to entry’ for any new or innovative publisher to overcome.
Accreditation of new entrants by JISC or similar organisations can reduce
this reliance on established practices and facilitate the adoption of new
techniques – providing they recognise the role they are playing in
facilitating change and don’t get manipulated by the publishing industry.
But I also feel that any procedural ‘requirement’ for a peer review
process is pretty close to meaningless. Differences in assessment
procedures have been noted by others in this discussion. We all know that
some academic publishers maintain higher standards than others, even if –
procedurally – their peer review process is the same. Similarly, within
single academic presses – the reputation of different disciplines can vary
markedly. The ‘process’ of selection doesn’t guarantee, or even protect,
the quality of the product. So publisher assessment needs to be beyond
something as formulaic as that.
Grant giving bodies:
Grant giving bodies also need to explicitly recognise the important role
they play in facilitating change – and not get trapped into formulaic
responses that can be used to stifle innovation. Requiring that any
publication must come from a specifically defined group of publishers or
‘standards’ would be bad news – especially if acceptance to that select
group is controlled by the publishers themselves.
Similarly, as others have noted, grant giving bodies are in the wonderful
position of being able to force researchers and academics to accept new
practices they may be reluctant to voluntarily adopt – and shouldn’t be
afraid to exercise that power. But to allow innovation they need to be
flexible in their requirements, rather than looking to provide hard and
fast rules. There are many areas where CC-BY licences are the most
socially desirable, and grant bodies may reasonably expect that as the
default licence for research they finance. But there are some areas where
a CC-BY licence may actually damage the quality of the research that can
be undertaken. So – grant givers may want to place CC-BY as a default
expectation, but allow researchers to identify in their proposal what
their dissemination strategy will be and if there are research critical
reasons why CC-BY is not appropriate for some of the research outputs.
Equally the researcher may have valid reasons why dissemination should
occur through a channel not previously recognised by the funding body and
where specific or default requirements are not appropriate. But many of
these issues can be raised by the researcher in the grant application
process – and assessed at that stage. So my suggestions would be to make
the dissemination strategy an explicit part of the research proposal
(provide default expectations rather than hard and fast rules in the
guidelines) and then judge the proposal as a whole when making funding
decisions.
Dr. Rupert Gatti
Open Book Publishers
See our latest catalogue at
Academic publishers as ice-cream vendors
The ice-cream analogy (http://www.knowledgeunlatched.org/about/business-model/) captures the crucial content/added value distinction splendidly. A real treat – thank you!
What kind of rights are suitable for an OA book?
Your analysis of the different rights a user can be given on an open access book (OAB) is very nicely done and illustrative. I understand your suggestion of a minimalistic definition of OAB (= free to read online) as an attempt to make it inclusive. However, I would like to bring to the discussion two similar situations where an inclusive definition was not the right choice or was not even considered.
The first case is that of learning objects, where the all-inclusive definition of a learning object as “anything digital that could be used to support (human) learning” was not only useless but even harmful, as it provided nothing to stand on.
The second case is free software. The free software community has made a distinction between what is gratis (free to use), open source (up to sharing source and allowing reuse, even commercial reuse), and free (share-alike). It seems to me that your proposal of “open as free to read online” is basically equivalent to gratis in the free software field. But this was not the definition that pulled the world of free software forward and made it possible to produce Linux, Apache, Firefox, etc.
In conclusion, I am worried that by aiming too low we would end up reaching even lower.
Dr. Rafael Morales. Researcher. IGCAAV @ UDGVirtual, Universidad de Guadalajara. Avenida de la Paz 2453, Colonia Arcos Vallarta, 44130 Guadalajara, Jalisco, México.
Quality and open access books
As director of the OAPEN Foundation and one of the founders of DOAB I’d
like to start by thanking all of you for taking part in this discussion.
I see it as a milestone ‘en route’ to OA books and OA book publishing.
I’d like to carry on with the discussion about quality control for open
access books.
First of all, although I agree with Gabriel’s point that OA and review
are separate issues, I’m afraid we can’t treat them separately if we
want to help establish OA book publishing. There is a lot of confusion
about OA among all stakeholders, and OA will only work if at least a
sizeable portion of authors, libraries, research funders and publishers
understand the benefits and want to make it work. The notion of vanity
publishing and the emergence of so-called predatory publishers are
examples of how OA publishing and quality control get tied together.
With that in mind, I think it is important to address the issue of
quality control and find new ways to establish quality in scholarly
books, especially in OA publishing. I’d agree with Heather that OASPA
can play an important role in helping to establish proper OA academic
publishers. But this is a big responsibility and a lot of work, and they
will need help from established stakeholders, in fact from all of us.
I’d say that the question of quality control for individual books is
more complicated. When we started with OAPEN in 2008, our approach was
to ask publishers to describe their peer review process. This process
needs to meet certain standards and it is made transparent by publishing
the description on www.oapen.org. In doing so we’ve come across many
examples of traditional, well established academic publishers that did
not conduct strict peer review procedures. Let me give some examples:
– More or less informal types of reviewing, for instance
reviewing conducted in editorial board meetings, with or without written
reports
– A preference for editorial involvement rather than external
reviewing, for instance a senior editor collaborating with a first time
author to develop a proper monograph, often over a long period of time,
or a research group spending a lot of time and effort to make sure a
publication is well reviewed by colleagues rather than trying to get the
opinion from one of the very few (and very busy) outside experts.
– Autonomous reviewing, in the case of well respected senior
series editors of important book series, who decide how to review
manuscripts by themselves and who’d see a publisher trying to establish
transparent procedures as unjustified interference.
– Non-academic publishing, in the case of authors who are so
well known that they are beyond pre-publication reviewing. These authors
often choose to publish with trade publishers rather than an academic
press.
There is a great diversity among book publishers, perhaps less among the
AAUP presses, but certainly among academic publishers elsewhere. And I
don’t think that the lack of strict peer review procedures means that
these publishers aren’t doing a good job or that their books aren’t
worthwhile scholarly works.
Now in moving to OA book publishing, should we force all publishers
everywhere to adopt the same strict peer review procedures? Or should we
identify a number of adequate forms of quality control and screen OA
publishers on the type of quality control they are conducting? Or should
we primarily aim to make the quality control transparent and expect
publishers to improve their reviewing procedures as they are made
public? Please let me know your thoughts.
I believe you are pointing at something very important, if we are going to make OA monographs work.
The current state of (peer) review of monographs is how journals looked some decades ago: very varied, as you describe it. I think it will be impossible, in the short run, to impose on OA monograph publishing something akin to the “double-blind peer review” that journals have established as the gold standard. But some kind of minimum standard, and an absolute requirement that the actual review process is documented/described (either per monograph or per monograph series), should be established.
Vanity publishing is a problem in OA journals (not large in reality, but it’s used for more than it’s worth by the anti-OA lobby) and also in TA monograph publishing. OA monograph publishing cannot succeed if we can’t manage to keep vanity publishing out of it, or even the suspicion of it.
Jan Erik Frantsvåg
Open Access adviser
The University Library of Tromsø
I have been meaning to delve into the discussion on this list much earlier and have been following it with interest. I am not a librarian, researcher, publisher or research funder, but I work with all of these groups and manage the projects that we run here at JISC Collections, of which OAPEN-UK is one (http://oapen-uk.jiscebooks.org/).
I would like to respond to some of the comments (although Eelco and Janneke already know what I think) and relate them to the recent results of our survey of HSS researchers.
1. Open Access – widening access – use – re-use
I agree with Heather and Malcolm here that we need to be careful about being too prescriptive about what open access means. I expect that most of us on this list would love to see re-use as part of the open access definition, but in the current UK HSS scholarly environment and in the current phase we are in, I think this would limit our success and be detrimental to opening up access, which is the key priority. The very fact that almost 80% of the 690 researchers who completed the survey said that their preferred CC licence would be CC BY NC ND is indicative of the nervousness around a move to OA. However, if you separate out the NC and ND, the researchers are more concerned about derivatives than about commercial use of their work. Over 63% said no to the use of CC BY, which is what the Research Councils in the UK are mandating for journal articles. If we forged ahead with a definition of open access monographs that mandated re-use, we may alienate the researcher community – the very people we need to get on board. Just one other thing to note: the figures above are the same whether we analyse the results by those that were aware of OA or by those that were not.
2. Peer review
I am by no means an expert on peer review (I am on a learning curve at the moment) but Eelco has been very useful in helping me see the variations in procedures, and his email gives a useful example. Although it has been pointed out that peer review is not just an issue with OA, I agree with Eelco that in an OA model it is something that clearly needs to be addressed, as there is a perception that OA means no peer review and that quality will therefore suffer. One of our survey questions asked the researchers to rank what they thought the impact of OA on quality, dissemination etc. would be. The results show that they perceive the impact of OA on dissemination to be positive, but that the impact on quality and on reputation and reward was neither negative nor positive. Now, in the current traditional model, peer review is extremely important: when we asked the researchers who had published a monograph since 2000 why they picked their last publisher, trust in the publisher’s quality assurance process was deemed the second most important reason (the first was that the publisher is good at disseminating to the required audience). Peer review is therefore a critical factor in the decision-making process, and any negative perceptions will impact a move to OA.
But again, being prescriptive could exclude some good new OA publishers, so the system needs to be open enough to account for new methodologies such as open peer review. I would, however, think it useful – and it would make things really visible to researchers – if there were some sort of icon system like the one Creative Commons uses for its different flavours of licensing. This could be quite flexible about the various methods used, but having an icon there would enable researchers to see that a. it was part of an agreed peer review classification system and b. link directly to an explanation of that peer review method. This would really help new OA publishers that are trying to establish their brand, which as we know is closely linked to quality. It would also encourage publishers to adopt the system and develop their peer review processes. It’s all about being transparent.
3. An independent organisation should audit and review publishers against set criteria
In an OA environment there is a greater emphasis on public accountability and transparency due to the openness of the funding arrangements – stakeholders care more now than they did when it was all behind closed doors and it was the library’s problem to manage subscriptions etc. It would be a mistake to follow the journals market, which is now facing major challenges in this area – especially with regards to transparency and hybrid journals. What we should be championing from the start is a clear and public way of reviewing publishers (new ones included) and their practices before they are accepted into the DOAB, and this could be, as Heather suggested, a role for OASPA or perhaps even people like us at JISC Collections. I’m thinking along the lines of COUNTER, an independent organisation that reviews and audits publishers and the usage data they provide. Let me explain my thinking.
Here in JISC Collections, we do a number of things before we finalise an agreement with a publisher and make their offering available to libraries in the UK. We use a model licence to ensure that the terms and conditions of use are clear, we check the publisher’s compliance with COUNTER, OpenURL, accessibility etc., and we provide this information to our libraries through our catalogue alongside the pricing model and the licence. Libraries can then subscribe safe in the knowledge that it’s a JISC Collections agreement – that we have negotiated the best possible terms and pricing etc. They trust in the JISC Collections brand.
If we are to support new open access monograph publishers and help them become established to foster healthy competition with the big brands – then these small and new OA publishers need something against which to prove they are worth being considered as a viable option for researchers. We need to help them be trusted by the academic community – especially as we know trust in QA is a critical factor.
So I think that we should be creating an agreed set of criteria (which can be updated as we learn more) against which publishers should be reviewed before they enter DOAB. These criteria could include:
– peer review process
– preservation and archiving policies in place
– that they make clear and transparent how revenue from author fees (as one example) is used and that this is reported on annually alongside a revenue generated report
– metadata requirements
– licensing policy for the whole and parts of the work
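Purely as a sketch of how such a review might be made transparent and repeatable (the field names and the example publisher below are my own invention, not an agreed DOAB standard), the criteria could even be recorded in a structured form and checked mechanically:

```python
# A hypothetical, minimal representation of DOAB-style entry criteria.
# Field names and the example publisher are invented for illustration.
REQUIRED_CRITERIA = [
    "peer_review_process",
    "preservation_policy",
    "fee_transparency_report",
    "metadata_provision",
    "licensing_policy",
]

def missing_criteria(publisher_record: dict) -> list:
    """Return the criteria a publisher has not (yet) documented."""
    return [c for c in REQUIRED_CRITERIA if not publisher_record.get(c)]

example_publisher = {
    "name": "Example OA Press",
    "peer_review_process": "editorial review plus one external reader",
    "preservation_policy": "deposited with a national library",
    "licensing_policy": "CC BY-NC-ND for full text",
    # no fee-transparency report or metadata statement documented yet
}

assert missing_criteria(example_publisher) == [
    "fee_transparency_report",
    "metadata_provision",
]
```

The point of a structured record like this would not be to automate judgement – each criterion still needs human review – but to make gaps visible, both to the reviewing body and to the public.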
As Eelco said – this is a lot of work but I think it will be necessary to tackle the negative perceptions associated with OA and also will help with transparency.
Well, that’s enough from me!
Head of Projects
Peer Review Policies. In Europe there is a debate over the use of a Peer Review Policy for scientific publications (not just OA). As an example you can see the proposal of the European Science Foundation (http://www.esf.org/activities/mo-fora/peer-review.html).
Università degli Studi di Perugia
Dipartimento di Scienze Storiche
I agree with Caren that ‘If we forged ahead with a definition of open
access monographs that mandated re-use, we may alienate the researcher
community – the very people we need to get on board.’ In my role as part
of an OA book publisher (Open Humanities Press), another question I’m
also concerned with at the moment is, are there other communities we
may shut ourselves off from if we don’t?
I’m not just thinking of those in the free and open source software and
open education movements and so on that I took Adam as possibly nudging us
toward (although I do wonder whether the OA movement doesn’t have
something to gain from being more mutually aligned with such communities
– strength in numbers and all that). I’m also thinking of the way
there’s been a recent shift in OA initiatives and funders’ mandates
toward libre OA and, with it, CC-BY licenses that allow such re-use. The
new policy announced by RCUK that Rupert mentioned in his post is one
instance of this turn; Peter Suber identifies a good number of others in
his SPARC Open Access Newsletter of June, 2012
To a large extent this turn toward libre OA can be seen as being
motivated by a concern not just for open access to the research, but
open access to the data too, including the right to mine texts and data.
And as a March 2012 JISC report pointed out, data mining can be blocked
by permission barriers:
‘Current UK copyright restrictions…mean that most text mining in UKFHE
is based on Open Access documents or bespoke arrangements. This means
that the availability of material for text mining is limited….
Even where text mining is allowed within publisher contracts, licensing
terms that require the full attribution of derivative works developed in
the text mining process can effectively prevent text mining usage. For
example, the Open Access publisher BioMed has such a licence, allowing
text mining and the production of derivative works, provided the
relevant attribution is made. However, where text mining is used to
identify new knowledge derived from cross-article analysis of patterns,
it is effectively impossible to identify all relevant attributions that
contributed to the new derived knowledge. This therefore means that such
text mining cannot be undertaken….’
Of course this shift is focused for the most part on journal articles
rather than books. But how long is this likely to remain the case?
(Certainly, all the government and funding agency events on issues
relating to digital media and the internet I go to these days appear to
be dreaming of some kind of seamless convergence between open access,
open data, the internet of things and cloud computing.)
So, I’m wondering, to what extent the publication of OA books in HSS can
afford to remain out of this text and data mining loop, and for how
long? There’s also a part of me wondering to what extent they are going
to be allowed to, and for how long?
Research Professor of Media and Performing Arts
Director of the Centre for Disruptive Media
School of Art and Design, Coventry University
Co-editor of Culture Machine
Co-founder of the Open Humanities Press
does open access mean read-only
I have found the discussion quite illuminating on many issues regarding what open access does mean. In particular, it seems to me some of the views are clearly dependent on the kind of books under consideration and the context of use, so I should clarify these aspects before commenting. I am part of the team working on the Latin American Open Textbook Initiative, a project funded by the European ALFA III programme, so I am concerned with open access to textbooks in the Latin American context.
Textbooks are designed to support courses, and courses on the same topic tend to vary a lot between regions, so textbooks that are free to read but not adaptable do not seem to meet our needs. So one of our initiative’s absolutes is that teachers/institutions should be able to adapt a textbook to their needs (which depend on their sociocultural and economic context, programme and course design).
Having said that, I am wondering whether the definition of open access should be a layered one: a kind of maturity model with gratis at the bottom and attribution-only (plus share-alike) at the top. I think such an approach would address some of the criticisms made of the CC licences, commented on by Gary Hall, as the aim would be to promote achieving the top level.
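Purely as an illustration of what I mean by layered (the level names and their ordering below are my own, not an agreed standard), such a maturity model could be sketched as an ordered scale, where a book at a higher level automatically satisfies every lower level:

```python
from enum import IntEnum

class OpenAccessLevel(IntEnum):
    """A hypothetical 'maturity model' of open access for books.

    Higher values grant the user more rights; the names and
    ordering are illustrative only, not an agreed standard.
    """
    GRATIS = 1            # free to read online only
    NON_COMMERCIAL = 2    # redistribution allowed, non-commercial use only
    SHARE_ALIKE = 3       # derivatives allowed under the same licence
    ATTRIBUTION_ONLY = 4  # full reuse, attribution required (e.g. CC BY)

def meets(book_level: OpenAccessLevel, required: OpenAccessLevel) -> bool:
    """A book at a higher level also satisfies every lower level."""
    return book_level >= required

# A CC BY book satisfies a requirement of mere gratis access...
assert meets(OpenAccessLevel.ATTRIBUTION_ONLY, OpenAccessLevel.GRATIS)
# ...but a read-only book does not meet a share-alike requirement.
assert not meets(OpenAccessLevel.GRATIS, OpenAccessLevel.SHARE_ALIKE)
```

The attraction of an ordered model is that funders, repositories and initiatives like ours could each state the minimum level they require, while still encouraging movement toward the top.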
Dr. Rafael Morales. Researcher. IGCAAV @ UDGVirtual, Universidad de Guadalajara. Avenida de la Paz 2453, Colonia Arcos Vallarta, 44130 Guadalajara, Jalisco, México.
What aspects should funders take into account when developing funding schemes for OA books?
I do believe that you could be right that such conversions could be done by readers, using free software.
But observing what little most writers understand about using Word, which their employers provide them, I am quite confident that most of them (us) would prefer to pay a modest sum to be able to download the version we want with no hassle and no need to understand anything technical. I have no belief whatsoever in the technical insight of the common user, I am afraid. A scepticism based on 30 years of working in the combat zone between users and techies, I may add. 🙂
And yes, I believe such a strategy should be based on small sums in the USD 1-4 range – or something along those lines. You cannot sell such a version for USD 25 when another version is freely available.
This may, of course, change over time. If conversions of complicated materials becomes very easy, this will also lower publishing cost.
Just to add to our reading list – RCUK (the UK research funding agency)
has today announced a new policy for OA publication of all the research it funds:
a. all journal articles to be made cc-by within 6 months of publication
(12 months for humanities and economics)
b. publication payments allowed as part of research grant only for
immediate cc-by publication
This policy is specifically for journal articles and conference proceedings;
it does NOT apply to monographs.
RCUK press release, with links to full docs:
Links and Thoughts
Some miscellaneous thoughts on the discussion so far.
First, to introduce myself. I’m a humanities academic (specialising in ancient Greek literature and philosophy). To date I’ve published six academic monographs, and a translation of Aristotle’s Poetics published with Penguin Classics. I have another monograph currently in production, and yet another progressing towards a first complete draft. My work on Aristotle means that I make frequent cross-border raids into other Humanities fields (especially philosophy), but also forage further afield in some STEM subjects (especially zoology, psychology, cognitive science), so I have some sense of what goes on outside my own subject area. The total income from my six monographs has been less than my total expenditure on other people’s academic monographs: so I’d benefit from OA financially. On the other hand, I’d lose financially if it extended to the Penguin Classic. On a third hand, OA would be massively beneficial in relieving the constraints that access barriers impose on the conduct of research. It’s true that, at present, if I know I need to read something I can generally get it, even though it might involve considerable time, effort and expense. I also know that I’m fortunate in this respect: I’m an academic employed by a university with a good library, and am not far from other good libraries; people who are not in academia, or who are academics in a less well-resourced environment, are not so fortunate. Even for me, there is a problem of how to determine whether I need to read something that I can’t easily get access to: it’s not feasible to invest the same amount of time, effort and expense on preliminary assessment. Since restrictive licensing is making it harder to get sight of material held at other universities, the transition to digital media is making that problem more acute.
From my perspective, the removal of such constraints on the conduct of research is the decisive practical argument for OA (together with the moral argument based on public accessibility). So, though I recognise that OA is likely to have all sorts of other consequences and open up all sorts of new possibilities, those seem to me incidental and potentially distracting from the (for me) core issue. In any case, I agree with Jan that we’re not likely to succeed in predicting future opportunities, problems and solutions: so better to let the new consequences and possibilities present themselves.
So, for example, I can see that OA creates interesting possibilities for licensing reuse and the creation of derivatives, but I wouldn’t want the more limited goal of enhancing accessibility to readers tied to that agenda. Likewise, Adam may well be right that there’s an opportunity to critique the ‘single author culture of production’, but I wouldn’t want the pressing need for less constrained access to research output to get tied to a culture-change agenda that will be (even!) more difficult to implement than OA (in the basic sense of accessibility to readers). [If I did get side-tracked into that, I’d want to get greater clarity about the distinction between authoring and production, and between production in the sense of getting the authored material to publication and production in the sense of the broader collaborative processes that support authoring. My next single-authored book is the product of a collaborative process, with academic input from series editors, referees, colleagues, and students who took a course in which some of the material was developed. The single author is simply a node in a complex of processes that are thoroughly collaborative: for some purposes, the single author is the most efficient and appropriate form for that node to take; in others, not.]
Regarding peer review, I agree with Gabriel: neither the transition to digital publication nor the transition to OA *of itself* throws up questions about peer review. Questions about OA and peer review do get thrown up by the *perception* of OA publications as perhaps not having been subject to peer review. This is just a matter of educating potential readers. Alas, this does mean culture change, which isn’t easy to achieve. For that reason, I think the greater culture change needed to get to new models of peer review will be slow (and there will be significant differences between disciplines in what works: e.g. I suspect that open peer review will be more feasible in a STEM subject with a very large research base than in a specialised corner of a small Humanities subject: say, genetics versus late ancient Greek rhetorical theory).
Is it possible to build an OA business model on charges for enhanced format? Possibly. I’ve been known to buy print copies of books that I’ve discovered in OA digital format. When a journal offers me an article in html and pdf, I’ll use the html version for quick preliminary evaluation; if I decide I want to read the article in detail, I’ll download the pdf (the reading experience is better; and if I end up citing the article, my readers will expect references with page numbers). This is true even when the two versions are equally sophisticated in respect of hyperlinking etc: consumers could also be offered a choice between an unenhanced and an enhanced version of the file. So differences in format and added value do matter, and might be worth paying for.
The distinction between research content (we’ve paid for that already) and added value (which people will be willing to pay for, if they are actually valuable) seems to me fundamental.
Quality and Open Access Books
On the question of quality and open access books, some thoughts:
Should we distinguish between open access scholarly monographs and open access books? Books that are not meant to be scholarly monographs can be open access, too. The criteria for quality will be different. In some cases, really different; the criteria for a quality novel, for example. However, there are books that are sources of knowledge and important to scholars, even if they are not scholarly monographs. Reports by government agencies and NGOs, for example, can be book length.
Isn’t one of the key criteria for assessing the quality of a scholarly monograph the reputation of its publisher? It seems to me that one of the issues coming up with “predatory” publishers in open access journals really has more to do with new publishers employing unethical practices such as listing people as being on the Editorial Board without their consent. Based on this experience, I wonder if what we need isn’t so much a statement that a new OA publisher is following certain practices, as a rigorous evaluation of the publisher to ensure that they are following appropriate practices. For example, it is easy to say that your organization practices double-blind peer review. Actually practicing double-blind peer review, and really understanding what the purpose is and whether it is done well, are different matters.
My view is that decisions about whether a new publisher is following appropriate quality-control practices should be made by senior scholars in the discipline in which the publisher operates, possibly in conjunction with established publishers. OASPA, the Open Access Scholarly Publishers Association, is a great start in this direction and worth supporting.
As others on the list have pointed out, there may be differences in what constitutes appropriate quality control, which may vary by discipline. In some cases, double-blind review may be necessary to establish quality. In other cases, a combination of peer review and expert editorial control may be optimal, particularly if an editor with both scholarly and publishing knowledge is available.
One reason to avoid delineating which quality control mechanisms to use is that this could stifle what I see as needed innovation in this area, such as open approaches to peer review and the more open approaches to writing such as liquid peer review.
Is there any scope to offer the ability for the readers to comment on,
annotate, and review the material?
Links and Thoughts
Thank you for all your thoughts, comments and insights up to now! Now that we are going into the second week of the discussion I would like to draw your attention to some interesting (and provocative) articles that came out last week and that can be related to our discussion on Open Access books:
I would also like to ask people to share their views on quality control for Open Access books, an aspect that has not been touched upon much in the discussion up to now. What kinds of quality control mechanisms are suitable for Open Access books? Is double blind peer review a necessity? What about new forms of peer-2-peer or open peer review? And what about editorial review, is this simply not authoritative enough or is it perhaps a logical starting point for new Open Access presses? In this respect, what counts as an Open Access (book) publisher?
Looking forward to your thoughts!
Thank you Janneke for the inspiring articles and your invitation to share our views on quality control for Open Access books.
I would like to share my concern about how to describe the quality control mechanisms of book contents, so that this information is visible within the book, for preparing metadata of the book for institutional, subject and multidisciplinary digital repositories.
In journals, the description of the peer-review process is included within the journal in standardized and specific formats. In scientific and academic books, where and how should the quality control mechanism of the book be described? Is there a specific format and place in the book to describe the evaluation procedure, so that it is visible and clear enough to include this information in the book’s metadata in digital repositories? Are there good practices or standardized formats to follow, as is the case for journals?
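As far as I know there is no standardized format for this yet. Purely as an illustration of what I have in mind (the field names below are hypothetical, not drawn from any existing metadata schema), a repository record could carry the review description as a structured block alongside the usual descriptive metadata:

```python
# Hypothetical book metadata record for a digital repository.
# The "review" block is an invented extension, not an existing standard.
book_record = {
    "title": "An Example Monograph",
    "type": "book",
    "publisher": "Example OA Press",
    "license": "CC BY-NC-ND 4.0",
    "review": {
        "method": "double-blind peer review",  # or "editorial review", etc.
        "scope": "per monograph",              # or "per monograph series"
        "description": "Two anonymous external reviewers selected "
                       "by the series editors.",
    },
}

# A harvester or repository could then filter on the declared method.
assert book_record["review"]["method"] == "double-blind peer review"
```

If a controlled vocabulary of review methods existed, the same information could be printed in the book itself (on the imprint page, say) and exposed consistently in institutional, subject and multidisciplinary repositories.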
Thank you, Dominique
Dra. Dominique Babini
Regarding peer review of articles, the practices across
different disciplines vary widely. In the Arts especially,
there are many journals that do not practice double-blind
peer review, so I don’t think we can simply transport
journals’ practices across to OA books. Rather, the whole
question of peer review across all kinds of output needs
to be revisited.
But do we need to address this problem right now in
relation to OA books? Printed books almost never
disclose the processes of evaluation that led to their
publication, and in the Arts and Social Sciences a
lot of faith is placed in the reputation of the publisher.
This faith is very often misplaced. Moving from paper to
digital publication doesn’t of itself have any connection
to peer review, does it? I’m not sure I agree that the
OA movement of itself throws up questions about peer
review. Rather, those questions ought to be addressed
no matter what the medium of dissemination. OA just
made us notice that.
OA: beyond technocracy?
Thanks for the interesting discussion so far. I’ve enjoyed reading all the contributions.
I’d like to return to one of the original questions posed by Janneke
“What is an Open Access book?” – but my answer will also touch on the
discussion as to whether OA is just about reading texts or whether it
can also refer to the process of writing/rewriting them. This will also
touch on issues of quality that the discussion has turned to this week.
I’m a media theorist: my work is situated at the intersections of
philosophy, media practice and cultural theory. And so for me OA is
first and foremost an exciting intellectual opportunity for doing
something conceptually — as well as politically — significant within
the realm of traditional institutional practices (practices of which I’m
critical but of which I’m also very much part). By this I mean our
educational system; the ideas of ‘the university’, ‘the student’ and
‘the scholar’, ‘the author’, the ‘text’ and ‘the book’; the broadly
understood publishing industry in its mainstream and independent guises.
Over the recent years, I’ve been involved in a number of collaborative
OA publishing projects which have allowed me to put some of the ideas
mentioned above to the test in a pragmatic way. I hope the brief
descriptions below can give you an indication of the kinds of
ontological and practical issues entailed in this opening question,
“What is an Open Access book?”, while also raising issues of how to
deal with problems of quality, legitimacy and licensing for OA projects.
(1) Liquid Reader (online teaching)
This is an open access ‘liquid course reader’ I developed, which serves
as a reader for a ten-week graduate theory course, ‘Technology and
Cultural Form: Debates, Models, Dialogues’, taught in a workshop format
to 25 students. This is the second core course on the master’s programme
in Digital Media at our institution, Goldsmiths, University of London.
The course discusses the relationship between various media and
technological forms, their social uses and the cultural context in which
they operate. The ‘liquid reader’ provides a practical case study of a
media form that students can both think about and actively construct.
Using the freely available educational wiki platform, PBworks, a basic
‘skeletal’ course reader was first devised online at the beginning of
the course. It included the key course content, and was subsequently
opened to customisation by students. Throughout the course, students
were involved in adding and editing the reader’s content. They were also
encouraged to experiment with the idea of ‘the reader’ (or, more
broadly, the idea of ‘the book’) through activities such as
collaboratively writing a wiki-style essay (on the topic, ‘Can you use a
Wikipedia model to write and edit books?’) and putting together an online
gallery of their photographic works as part of the ‘reader’. The idea
behind this project was to provide an open-access study tool which
facilitates the sharing of knowledge and pedagogic practice. The course
reader is freely available both to Goldsmiths students and to students,
tutors and general users internationally. The project thus promotes
socially significant ‘open scholarship’ and ‘open learning’ under the
open access agenda.
(2) Living Books (academic book series)
Living Books is a series of 20+ edited open access books. It runs on the
same lines as (1) above, but has a slightly narrower remit, in that it’s
concerned specifically with providing a bridge between the sciences and
the humanities in their respective understandings of ‘life’.
(3) Open access journal
I’m also involved in editing the open access journal Culture Machine
(which is in its 13th year now). As well as having an annual themed
issue (we have a new one on attention economy coming out in the next few
weeks), it also offers rolling book reviews — as well as a space called
InterZone, where commissioned topical issues and discussions can be
published all year round.
All of these have been developed with little to no funding — coupled
with lots of goodwill from people from all over the world…
I think there is much need for OA in the arts, humanities, and social
sciences, but, from my experience, for the project to really catch on
widely among the academic body in those disciplines, it has to have
strong intellectual underpinnings: the rationale has to be
philosophically sound; it has to speak about creative alternative modes
of knowledge production; the space for experimentation has to be clearly
articulated and not closed down too early by technicist discussions
about licensing and copyright (even though of course I do recognise the
pragmatic need for the latter, and am very appreciative of the work done
by colleagues in information sciences, libraries, archive collections,
etc. in this regard).
Unless we offer that deeper intellectual justification and avoid
foreclosing the debate too early, my fear is that OA will remain a
specialty interest, with most academics in the more critical disciplines
feeling it’s yet another technocratic managerialist solution imposed on
them from above because the funding regimes for the traditional modes of
publication have been found wanting. That would be a shame, as OA can be
much more than that. Indeed, it’s probably one of the most interesting
and potentially radical developments in the academic / publishing world
in recent decades.
With very best wishes,
Professor Joanna Zylinska
Department of Media and Communications
Goldsmiths, University of London
Re: [DOAB] What aspects should funders take into account when developing funding schemes for OA books?
I think the most important issue is access itself. Speaking as a person working in academia in Asia, access is the biggest obstacle to doing research and teaching. Simply getting information and keeping up to date with it is hard. Today almost all of the information used, by both students and faculty, is accessed from the internet because of the lack of access to hardcopy books. Textbooks are especially expensive and generally only available from the major publishing companies.
My point is that access must be considered first and foremost when developing any funding model for OA books (or any books, for that matter).
I guess it depends where you want to address the question of Open
Access. If it is a matter of just funding current or future single
author works to ensure they have licences that enable open distribution
then I see your point. But this kind of thing starts to look short term
to me. The issue runs deeper and this kind of strategy is likely to last
only as long as the funding does.
If long-term strategies are required, the business model needs to be rethought.
IMHO this all comes down to getting away from the need to resell
artefacts. Finding ways to pay for the production of entirely freely
licensed original materials is critical. As I see it, there are many
ways to do this but they point more towards collaborative production
which can deliver high quality materials quickly and which can (if
licensed well) be used in repositories to build more materials.
It means funding different models of production.
I must confess that I believe that in this context we should keep the discussion to points about OA, not about other aspects. I would like to comment, though, that OA means that the electronic version is the main product, paper a secondary one – as opposed to today where paper is primary and electronic secondary. This shift will make it easier to start developing new forms that only electronic versions will allow. So new products and modes of production will be a result of a shift to OA, in my opinion. For now, let us look at the funding of OA to books.
If I remember correctly, John B. Thompson, in his “Books in the digital age”, describes the traditional monograph as a product with a very uncertain future, as it was already (in 2005) financially insecure. The situation hasn’t improved since then.
A large number of such books are already being produced only because someone on the author’s side pays a major part of the costs directly to the publisher, as most such books are not financially viable. For many of these books, discarding the paper version will cut so much direct and indirect cost (including marketing, warehousing, logistics and administration) that the amount made available will enable publishing in OA without extra funding. This should be the first market to be exploited by prospective OA publishers.
Another point to make is that OA is about giving free access to a useful version, not to every version – paper versions, for example, may still be sold. It could be a strategy to make an html version OA, while selling pdfs or e-book versions for small sums, to create some kind of income stream. How much income could be generated, I don’t know – but this should be tried out.
If we start here, gather some experience and let the world evolve, new business opportunities will present themselves. We cannot predict and solve problems more than a few years ahead – by then the world will not be today’s world, and the problems and opportunities will be different from what we imagine now.
Have a nice summer!
Jan Erik Frantsvåg
Open Access adviser
The University Library of Tromsø
phone +47 77 64 49 50
I’d like to comment on one part of Jan’s very interesting
posting, much of which I agree with:
> Another point to make is that OA is about giving free access
> to a useful version, not to every version – paper versions,
> for example, may still be sold. It could be a strategy to make
> an html version OA, while selling pdfs or e-book versions for
> small sums . . .
I’m sure some people will pay for a PDF or e-book version of
something that they can get for free as HTML, but why should
they? There’s nothing difficult about turning HTML into PDF
and the various e-book formats, so if they’re just paying for
this service (which they can do themselves with free software)
the price would have to be almost zero in any case.
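To illustrate just how mechanical the HTML-to-e-book step can be, here is a toy sketch (not anyone’s production pipeline) that packages a single HTML fragment as a minimal EPUB using only the Python standard library. The function name and file layout are illustrative; real free tools such as pandoc or Calibre add chapter splitting, metadata handling and validation on top of what is essentially this container format.

```python
import zipfile


def html_to_epub(title: str, html_body: str, path: str) -> None:
    """Package a single HTML document as a minimal EPUB file.

    A toy sketch only: an EPUB is just a zip archive holding XHTML
    content plus two small manifest files.
    """
    xhtml = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<html xmlns="http://www.w3.org/1999/xhtml">'
        f'<head><title>{title}</title></head>'
        f'<body>{html_body}</body></html>'
    )
    opf = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<package xmlns="http://www.idpf.org/2007/opf" version="3.0" '
        'unique-identifier="id">'
        '<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">'
        '<dc:identifier id="id">urn:example:1</dc:identifier>'
        f'<dc:title>{title}</dc:title><dc:language>en</dc:language>'
        '</metadata>'
        '<manifest><item id="c" href="content.xhtml" '
        'media-type="application/xhtml+xml"/></manifest>'
        '<spine><itemref idref="c"/></spine></package>'
    )
    container = (
        '<?xml version="1.0"?>\n'
        '<container version="1.0" '
        'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
        '<rootfiles><rootfile full-path="content.opf" '
        'media-type="application/oebps-package+xml"/></rootfiles></container>'
    )
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("content.opf", opf)
        z.writestr("content.xhtml", xhtml)


if __name__ == "__main__":
    html_to_epub("Sample", "<h1>Chapter 1</h1><p>Open access text.</p>",
                 "sample.epub")
```

If forty-odd lines of standard-library scripting can produce a readable e-book from HTML, it is hard to see a durable market in selling that transformation alone.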
That raises an interesting question I’d like to put to this
list: are there ANY digital transformations that are so
inherently difficult and/or expensive that there could
conceivably be a market in providing that transformation
as a service? (You might of course think I’m wrong about
HTML > PDF and HTML > e-book transformations, so I’d like
to hear that objection if it’s your answer.)
The transformations are not hard – not, that is, unless you are a
publisher. For many reasons publishers have enormous problems getting
material into these formats. Published content does not usually start
its life as HTML (which is by far the easiest format for facilitating
these conversions). Instead, works start life in Word, complex XML, or
some other proprietary or complex format (like LaTeX), which makes
conversion difficult.
So, offering resale of different formats at affordable prices or by
subscription could be an interesting strategy. But, at the risk of
sounding like a parrot of myself… IMHO Open Access needs to look at the
culture of production if it wants to pursue opportunities like the one
you suggest. For example, it would be better if the content originated
in HTML.
I might be able to guess at the objections to this, as HTML is not often
considered a ‘serious’ content format, especially when it comes to the
production of structured content. However, these issues are being
addressed extremely quickly by new browser-based authoring environments,
in-browser typesetting (including TeX emulation) and CSS controls for
flowable-text-to-page conversions (have a look at
http://dev.w3.org/csswg/css3-gcpm/).
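For a flavour of the CSS controls referred to above, a minimal hypothetical paged-media stylesheet might look like the following. The @page rule and margin boxes come from the CSS Paged Media module that the linked GCPM draft builds on; the trim size and margins are invented for illustration, and renderer support varies (tools such as Prince or WeasyPrint implement much of this).

```css
/* Sketch: paginating flowable HTML for print/PDF output.
   Values are illustrative only. */
@page {
  size: 148mm 210mm;          /* A5 trim size */
  margin: 20mm 15mm;
  @bottom-center {
    content: counter(page);   /* running page numbers */
  }
}

h1 {
  page-break-before: always;  /* each chapter opens a new page */
}
```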
Does Open Access Mean Read Only?
It is great to be having this discussion forum on really important issues.
Just a little background to help interpret our comments below – we
(Alessandra Tosi and Rupert Gatti) are academics at Cambridge who started
up a non-profit ‘open access’ academic publisher called Open Book
Publishers 3 years ago. We have now published 21 titles in the Humanities
and Social Sciences, which are released as both printed and digital/e-book
editions. All our titles are free to read in their entirety online. To
date almost all (20/21) have been CC BY-NC-ND licensed and one is CC BY
licensed. Of the next three titles to be published, two are CC BY and one
is CC BY-ShareAlike. We would just like to contribute some of our
thoughts and experiences to the discussion.
First – as has been so well stated by others in this discussion – there is
a huge difference between free-to-read and pay-to-read. Our primary
concern was, and remains, to make high quality research available for
anybody to read. During the month of June alone our 21 free to read
titles received over 25,000 visits, from over 120 countries, with a total
of 480,000 pages viewed. For titles that might reasonably expect to sell
200-300 copies in a traditional publishing format, that is a whole lot of
reads! Clearly, making works free to read has a huge impact on the
dissemination of knowledge.
The three absolutes at OBP have been that the book is free to read, free
to share through one of the CC licences, and is rigorously peer reviewed.
Everything else we do has been to enable us to create an economically
viable business model to support those ‘absolutes’. One important concern
is in attracting really good work to publish, and here the flexibility of
the alternative CC models has been important in convincing some scholars
to try our publishing model at all. Clearly, prior to publication the
authors have complete control over their work and, through their ability
to select dissemination outlets, control over the degree of freedom
awarded readers. In the humanities and social sciences the extended
development of an argument is what makes a monograph an important research
output. Many authors are extremely reluctant to give ‘carte-blanche’
freedom for anybody to adapt their ‘subtle and sophisticated prose’ and
still keep their name upon the work. Many have experience of the press
cutting and rephrasing statements they have made to imply something very
different to what they originally said, and really don’t want to see that
happening to a book that then carries their own name. So they want to be
able to say yes or no to derivative works, and without the use of CC
BY-NC-ND licences they would not have been prepared to make the work free
to read and share at all.
An additional consideration is that almost all of our authors have wanted
to include images or other content the copyright for which is owned by
others. To obtain permission to reproduce these works we have needed to
assure copyright holders that a CC BY-NC-ND licence is being applied, and
that digital images are reproduced at low resolutions. Some of our
forthcoming titles are in anthropology and issues about the use and re-use
of material and images provided by the communities studied are very
difficult and sensitive. We have not to date set separate licences for
different segments of the books, and this may be a possibility for the
future, but a general requirement for a licence much broader than CC
BY-NC-ND will cause difficulties for the inclusion of some primary and
secondary materials and so restrict what can be published that way.
OBP has no institutional support, so creating a viable business model has
been important to us. To cover the publication costs we need to generate
about £3,500 per title in net revenue – with the production of printed
editions (through the use of Print on Demand technology) adding
insignificantly to that cost. For the last seven months we have
successfully balanced operating costs and revenue, with roughly half the
revenue coming in the form of grants raised by authors, and half coming
from the sales of printed and digital editions. To date we have been
reluctant to publish a work CC BY without a significant proportion of
overall publication cost being met pre-publication, worried that CC BY
will reduce our ability to support post-publication revenue streams. We
lack both the evidence to support those concerns, and the financial
strength to risk experimenting to find out! Of the three CC BY titles
published or forthcoming, two have come with significant publication
grants by a research funder. The third has successfully raised funding
through an innovative crowd-source channel – unglue.it – where over 250
individuals contributed to an online campaign to raise US$7,500 to release
the book and associated audio and visual material CC BY. (Of course
experience with these new CC BY titles will also help us assess our
concerns over post-publication revenue – so we can get back to you on that!)
Publication grants through research funding bodies for CC BY publication
have been important for us, and appear to be the dominant business model
presently being advocated by many commentators, for example in the UK’s
recent Finch Report. But for several reasons we feel concerned about
relying entirely on this as the only revenue source or business model.
First, as Gary Hall mentioned in a previous comment, we are concerned
about the institutional control it may allow commercial publishers to
maintain over the academic publishing process; and second because in the
humanities and social sciences many authors just don’t have access to
research grants to support publication in the same way many scientists do.
Some of our authors are retired, others have conducted their research
without recourse to external funding, and few have had institutional support
for publishing expenses. At least at present, we feel the availability of
a range of CC licences allows us to develop and experiment with innovative
revenue streams to support our ‘free to read – free to share’ publication
model without relying on a pure ‘author subvention’ model.
So, on the question “Does Open Access Mean Read Only?” we would support
previous comments that if the definition is to extend much beyond free to
read it should not be by very much. And if it is to be extended much
beyond free to read, then we are in need of a new definition to
acknowledge the substantial social benefits free to read extends over
pay-to-read.
Alessandra Tosi and Rupert Gatti
What aspects should funders take into account when developing funding schemes for OA books?
As we are currently devising a funding model to encourage OA book publication (or rather, the development of sustainable business models for OA book publication), I would like to ask participants in this discussion about what they consider to be indispensable requirements for calls in this domain.
This really is still an open question for us. There was a workshop on OA books with a number of stakeholders from the German context that took place at the University of Göttingen in April (http://www.lisa.gerda-henkel-stiftung.de/content.php?nav_id=3725). I would be genuinely interested in your responses as the funding of OA books should be based on standards relevant for an international community!
Dr. Angela Holzer
Deutsche Forschungsgemeinschaft (DFG)
German Research Foundation-Scientific Library Services and Information Systems-
I take it as a given that OA largely means leaving the resale of
artefacts off the table for discussion of sustainable models. Would I be
right in this?
If so, then I think there is a great opportunity to critique how the
single author culture of production has informed the book resale revenue
model and explore how different business models might work with other
modes of production.
For me this means collaborative production models need to be
investigated, and the social, technical and financial mechanisms of
collaborative book production need to be explored and documented.
Some brief thoughts on Open Access (OA) statements and OA books, taken from a paper* presented at the Associazione Italiana Biblioteche 2011 conference in Rome (*“OA publishing: a sustainable model?”, in press). At the beginning, the OA movement focused on journal articles, considered the most widespread means of science communication. In the Budapest manifesto (2001), the first official OA text, there is no mention of books. A note: only 5 of the 16 signatories of the manifesto are humanities scholars. Monographs are never explicitly mentioned even in the subsequent founding documents. In the Bethesda Statement (2003) we can find a more extensive definition of publishing (“publishing is a fundamental part of the research process, and the costs of publishing are fundamental to the cost of doing research”); references to periodicals continue to dominate, however (publishers are “journal publishers”). In the Berlin Declaration (2003) we find “publications of original results of scientific research”, an expression that includes books, but they are not explicitly mentioned (there is instead a reference to “journals”). We would have to wait another few years before finding an explicit mention of OA books.
dr Andrea Capaccioni
LIS assistant professor
Università degli Studi di Perugia
Dipartimento di Scienze storiche
Thanks for the opportunity for discussing some very interesting issues.
I wanted to chime in early with a basic question as, for me, it informs
the rest of the Open Access framework.
Essentially, I was wondering if Open Access is a read-only phenomenon or
if it extends to include “write” access.
In other words, does Open Access mean access to the content only, or
does it also imply access to the source to facilitate modification…
Founder, FLOSS Manuals
Project Manager, Booki
Book Sprint Facilitator
mobile :+ 49 177 4935122
identi.ca : @eset
booki.flossmanuals.net : @adam
Basically, the answer depends on the license through which the content is made available. In DOAB, all books have a license which enables at the very least the sharing of content. Some licenses also permit modification.
Project Manager Digital Publications
Amsterdam University Press
1016 BG Amsterdam
tel: +31 (0)20 420 0050
Thanks for your response. I agree the license dictates the formal
requirements of access and the use or reuse after access. However, my
question isn’t about what licenses stipulate but about what Open Access
suggests, encourages, or desires.
Is Open Access a read-write idea (reusable source content) or a read-only
one?
If possible I would be interested in thoughts about this without framing
it as a license discussion or mentioning the attributes of specific
licenses.
In my opinion, OA should enable both reading and writing.
However, making scientific/scholarly knowledge available without barriers is not always possible. Some authors – or other rights owners – feel more comfortable with sharing, while prohibiting changes to the content. Still, this makes more knowledge available than keeping it behind (pay) walls.
Project Manager Digital Publications
Amsterdam University Press
1016 BG Amsterdam
tel: +31 (0)20 420 0050
fax: +31 (0)20 420 3214
I was kind of hoping for that response. I find that both terms, treated
separately (Open and Access), fail to suggest that the content could be
reusable source material for deriving works. Which is why I’m interested in
what the actual values are in the Open Access world.
I agree with you – Open, for me, is not good enough unless write access is
enabled, but I don’t know how common that position is in this sector.
Hello everyone and thanks to DOAB for hosting this conversation! Some good comments about whether free to read alone is sufficient for open access, or whether re-use is necessary as a minimum.
About me: I am a librarian, scholar of scholarly communication, open access advocate, and doctoral candidate working to complete my dissertation, Freedom for scholarship in the internet age. Details can be found from the links in my signature.
My perspective is that free to read / free to re-use is not a simple dichotomy, and it is best to consider this question in a more nuanced way. Here is a first attempt at a range of rights worth considering in an open access context.
There is a huge difference between a work that is free to read online and one that is not accessible to all. There are many works that are still inaccessible, or inaccessible for practical purposes. For example, even though I am a scholar from a wealthy country, there are industry reports pertinent to my work that I cannot read because a single report costs more than a thousand dollars, and if any library owns the work, they are forbidden to share via interlibrary loan. It is this kind of inaccessibility which is most clearly not open access; compared to this, free to read online is a huge improvement.
Then there are rights for the reader, such as rights to print, download, save for personal use, and share with colleagues. Beyond free to read online, these are probably the easiest rights for creators to consider granting.
Next is re-use rights for the reader, such as rights to make changes to personal copies (add notes, comments, etc.), and share this version with colleagues.
With respect to changing the content, note that a single work may well contain elements with different re-use rights, for good reasons. For example, if an open access book contains a picture, chart, etc., taken from another work that does not allow re-use, then most of the OA book could allow for re-use, but not that bit. For example, for authors in anthropology, whether a subject is willing to allow a picture to be taken, published, and/or re-used by others, are several different questions.
There may also be technical reasons why making a work re-usable will be variable, particularly with non-textual content. If video clips are inserted into the book, or map-pictures developed from GIS, then it may make the most sense for the book to include the final version but not necessarily the working version, which would be necessary to effectively re-purpose that bit. As an example, I write a quarterly series called The Dramatic Growth of Open Access, posted on my blog, and often include charts. For technical reasons, when I upload a chart created from my spreadsheet, I load it as a picture, and Google’s Blogger transforms my picture into a more web-friendly version, which looks nice but is not high-resolution, so it doesn’t necessarily work that well for repurposing. On my blog the rights allow for re-sharing and creation of derivatives; however, anyone aiming for quality is advised to contact me for a higher-quality version of the chart. I’m not sure that there is sufficient demand for re-use of these graphs to make it worth my time to clean up the working spreadsheets containing the charts for sharing (the pre-chart versions are posted on the web). As sharing our work for re-use evolves, this may become a less common problem – if lots of us want to get at the underlying content to re-work it, then applications allowing us to easily do this may well develop. However, we are not there yet, and it is not at all clear at present that this will happen.
One consequence of the need for different rights for different materials is that any rigid insistence on an open access book having the same rights applied to every bit of content within the book will limit the content that can be included in the OA book.
Commercial re-use rights are another possibility, one that may be a better fit for some publishers / business models than others.
One reason for considering a nuanced and inclusive approach to rights is that we are likely to benefit from more free works. If a minimum definition of open access goes beyond free to read online, then it shouldn’t go too far beyond, and there should still be a way of recognizing that free to read online is much better than not free to read at all.
Heather Morrison, MLIS
Doctoral Candidate, Simon Fraser University School of Communication
The Imaginary Journal of Poetic Economics
While I appreciate the concept of “open” that includes all possibilities – remix, reuse, repurpose download print copy annotate, etc. – I’d like to support Heather’s notion that “[t]here is a huge difference between a work that is free to read online and one that is not accessible to all.” Even “free to read” by a worldwide audience is an important step forward.
Director of Library Services, University of the People
Hi Adam and everyone,
Is Open Access a read-only phenomenon or does it include ‘write’ access?
Well, as always it seems, the first thing to say is that Open Access
(OA) isn’t one thing. There are lots of different definitions of open
access. For evidence, just look at the critical response of many of
those associated with the Open Access movement to the recent Finch
report (put together by a group convened by David Willetts, the UK
Science Minister), even though the Finch report is ostensibly supporting
the publication of UK research Open Access. The problem is, the Finch
report is promoting a version of ‘author-pays’ OA that is seen by many
as prioritizing and protecting the interests of the established
publishing industry rather than, say, those of academics, researchers or
the public: hence the criticism.
As both Heather and Irene have stressed, “[t]here is a huge difference
between a work that is free to read online and one that is not
accessible to all”, and even “free to read” by a worldwide audience is
an important step forward.
However, to draw on some recent research Janneke Adema and I have been
conducting on the subject of Open Access books and which we’re hoping to
publish shortly, in many of the more formal OA definitions (including
the important Bethesda and Berlin definitions of Open Access, which are
two of the three component definitions of what has become known as the
Budapest-Bethesda-Berlin (BBB) definition of OA, and both of which
require removing barriers to derivative works), the right to re-use and
re-appropriate a scholarly work is actually acknowledged and
recommended. That said, though, in both theory and practice a difference
between ‘author-side openness’ and ‘reader-side openness’ – or read-only
access and ‘write’ access – does indeed tend to be maintained.
This is especially the case with regard to the publication of books,
where for a variety of reasons (including the licensing, technical and
other issues Heather details) a more narrowly defined vision frequently
holds sway. This is something Janneke can comment on better than I can
I’m sure, but of the books presently available open access, for example,
it seems only a minority have a license where price and permission
barriers to research are removed, with the result that the research is
available under both Gratis (accessible online without a paywall) and
Libre (re-use) conditions. An examination of the licenses used on two of
the largest open access book publishing platforms or directories to
date, the OAPEN (Open Access Publishing in Academic Networks) platform
and the DOAB (Directory of Open Access Books), reveals that on the OAPEN
platform (accessed May 6th 2012) 2 of the 966 books are licensed with a
Creative Commons CC-BY license, and 153 with a CC-BY-NC license (which
still restricts commercial re-use). On the DOAB (accessed May 6th 2012)
5 of the 778 books are licensed with a CC-BY license, 215 with CC-BY-NC.
And that’s just to focus on Creative Commons licenses, which are not
particularly radical politically. It’s rare to find in discussions of OA
the kind of radical critique of Creative Commons from a CopyLeft or
CopyFarLeft perspective that one comes across in certain areas of
critical media studies, software studies and/or discussions of free and
open source software: i.e. that CC’s concern is with reserving rights of
copyright owners rather than granting them to users; that CC is
extremely liberal and individualistic, offering authors a range of
licences from which they can individually choose rather than promoting a
collective agreement, policy or philosophy; and that what CC actually
offers is a reform of IP, not a fundamental critique or challenge to IP.
Research Professor of Media and Performing Arts
Director of the Centre for Disruptive Media
School of Art and Design, Coventry University
Co-editor of Culture Machine
Co-founder of the Open Humanities Press
‘Pirate Radical Philosophy’, Radical Philosophy, 173, May/June, 2012
Next week, from the 9th until the 22nd of July, the DOAB (The Directory of Open Access Books) will be hosting an open, online and moderated discussion on Open Access books. This online discussion with publishers, scholars and the wider Open Access and publishing community will focus on getting an overview of opinions and views that exist on Open Access books, and quality control, peer review and the Open Access publishing of books.
The goal of this discussion will not be to decide on a definition of what constitutes an Open Access book or on what the proper way to publish an OA book is. Although the data gathered through discussions on these topics will be used to formulate recommendations for the DOAB, the idea of this discussion is more to establish a set of ‘lowest common denominators’, requirements for entry that are flexible and can change, following the processual nature of both books and the discourse on Open Access books. This discussion is thus predominantly meant to gain an overview of the views and opinions that exist in the scholarly and publishing community with respect to Open Access books.
To subscribe to the DOAB mailing list, where the discussion will take place, please follow this link: https://listserv.gwdg.de/mailman/listinfo/doab
The discussion will take place over two weeks, but feel free to jump in at any time that is convenient for you. Archives of the discussion will be kept here, and digests of the daily discussion will be posted to the DOABlog.
We will start off the discussion on Monday the 9th with an introductory email. The main questions that will lead the discussion are:
- What is an Open Access book?
- What is an Open Access book publisher?
- What kind of copyright licenses are suitable to use with an OA book?
- What kind of quality control do we need for OA books?
- What kinds of peer review are seen as authoritative?
However, please feel free to add questions, or suggest other topics for discussion.
For any further questions, or if you are having problems subscribing to the mailing list, please contact: email@example.com