DOAB Discussion Digest – Thursday July 19th

OA: beyond technocracy?

To build on the thread started by Joanna Zylinska, I see two areas where it might be fruitful to ask whether we are jumping too soon to technocratic answers:

QUALITY AND OPEN ACCESS BOOKS
The question of the quality of open access resources, books and journals alike, has prompted some good points about evaluating scholarly work, as a few commenters have noted in interesting posts on this list. My question is: in the rush to establish the quality of open access, do we risk imposing technocratic solutions that favor measurement over actual quality? I would argue that we do more than enough of this already in scholarship, as we have seen with the power of the impact factor and the Research Excellence Framework approach, which I see as problematic (others may disagree, and I would certainly acknowledge that there are both benefits and downsides to this kind of approach).

For example, imposing a peer-review approach rather than an editorial one may or may not be beneficial for scholarly monographs in fields where peer review does not currently exist. In some cases, the invitation to write a book or book chapter comes from a leading scholar in the area in question, who already knows that the author has authoritative knowledge of the subject; hence the invitation to write. In such a case, what is needed for a high-quality book may be editorial support, not peer review at all. In other cases, for a monograph-length work, unless a discipline has a well-established tradition of thorough peer review of such works, it is a little hard to imagine scholars with enough time on their hands to do a thorough job of reviewing.

In other words, I don’t think we have enough evidence to employ this kind of “evidence” as an indicator of quality for a scholarly work or publisher. Rushing to impose particular procedures in order to speed up the assessment of the quality of open access books may actually detract from quality. By “rushing”, I mean processes that take anything less than decades of research, including qualitative research, and deep thought by many scholars.

CREATIVE COMMONS LICENSES
First, let me say that I am a huge fan of CC: I use the licenses regularly, contribute to campaigns and encourage others to use the licenses. However, in the long term it is my hope that CC will help us to develop new norms of sharing that will make the licenses per se, if not irrelevant, at least less important. For example, one reason CC is important now is that we have automatic copyright; perhaps someday this will change. In the meantime – and I have thought about this a fair bit – specific CC licenses do not necessarily do or say quite what people think they are saying by using them. Some examples relevant to scholarship:

CC-BY and text-mining
1.      Note that a creator is perfectly free to create a locked-down work that is not technically suited to text mining at all (such as a PDF) and use a CC-BY license. If we want people to quit relying on PDFs, it would be better to say, “let’s quit using PDFs, here is why”, not “use CC-BY” (which will result in many people putting CC-BY on their locked-down PDFs).
2.      The “BY” in CC-BY means attribution. Large-scale text mining of many documents, data, etc., to create new works makes meaningful attribution extremely difficult if not impossible. So, technically, if what we really want is text mining, it is the “BY” that should be discouraged.
3.      I question whether CC-BY is needed for text mining at all. Google and other search engines work by crawling websites (text mining) and creating derivatives to deliver results to users. If CC-BY were necessary for text mining, they would have to stop this. I would argue that anything freely available on the web, with nothing in the metadata to say no robots allowed, is available for text mining (see the sketch below).
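For illustration – a minimal sketch, with hypothetical URLs and a hypothetical user-agent name, of how a text miner might check the “no robots allowed” convention using Python’s standard library before crawling:

    # Check a site's robots exclusion rules before text mining.
    # The URLs and user-agent name here are hypothetical.
    from urllib import robotparser

    AGENT = "example-text-miner"

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.org/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    url = "https://example.org/oa-monographs/chapter1.html"
    if rp.can_fetch(AGENT, url):
        print("No robots exclusion applies; fine to mine:", url)
    else:
        print("The site has opted out; skip:", url)

If the site publishes no robots.txt at all, the parser allows fetching by default, which matches the default described above: freely available, not opted out, so available for mining.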

Derivatives and text-mining
As with CC-BY, some think that allowing derivatives is necessary for text mining. I argue that this is not necessary at all, for the reasons noted above. Something else to think about: authors might very well be completely happy with text mining, but not want other types of derivatives of their work (such as a new article that changes the wording around a bit, and hence the meaning).

My two bits for today,

Heather Morrison

What aspects should funders take into account when developing funding schemes for OA books?

It seems to me that Frantsvåg is suggesting that what we are currently
calling a book will be quite different from tomorrow’s books. Much of
what is currently in books is also on the web, so there is a lot of
waste in the industry, the bookshops and the libraries. So, should we
worry more about the online collaborative production of new things
(flexible, hyperlinked, customisable, multimedia, updateable, but
unprintable in its native form) than about old-fashioned e-books?

Regards,
Rafael

Rafael,
there are a lot of questions here, and prophesying about the future is easy; getting it right may be much more difficult.

Yes, I believe that in the future we will have a number of different products where today we have the book, e- or p-. But I think we need to move stepwise, and that we should now concentrate on creating viable e-monographs that are OA. They will probably look very much like traditional books, at least for the near future. But if they are created as e-books, not as e-versions of p-books, this could liberate the form from what is possible on paper and make it possible for the creative ones to develop new forms. At this stage, I would like to see these as positive side effects, not something we should target.

If we try to make too many big changes at once, nothing will happen. With small changes, we are on our way to something – nobody knows what, but if a change is for the better, we had better make it.

So, for now: OA e-monographs will be what I’m looking for.

Best,
Jan Erik

hi,

I am not sure Rafael is prophesying about the future; it seems more
like he is commenting on the present. “The future is now”, as has been
quoted often enough with this kind of thing, so much so that it is
hardly worth noting the source.

Publishing has always been collaborative, but the collaboration has
been hidden from view. Single authorship, not to be confused with the
monograph as a single-subject form, is a myth. Unless we want to debate
the very woolly boundaries between single authorship and collaboration,
we might as well save our time and admit the collaborative nature of
book production.

Putting it online just makes it easier for the collaboration to occur.
Nothing is lost, and we are not turning ourselves into prophets by
doing so. Nor are we ‘anarchising’ the book production process, or
projecting it immediately into an unknown, since we can control the
level of collaboration (from strong to weak) using handy tools (another
discussion) and deliver content that looks *just* like the e-monograph
or even the paper monograph. In fact, they are monographs, just made
online instead of in MS Word.

From there things will just evolve. It’s not anything radical. However,
ignoring pre-now methods is quite a radical position.

adam

DOAB Digest, Vol 1, Issue 14 – Funding OA monographs

Hi,

Thank you all very much for the interesting discussions. I have been following the debates on quality assurance and licensing with a great deal of interest, as we are currently discussing how to deal with both in a call for funding of OA monographs. We too are inclined to set a re-use license as the standard for funded pilot projects in this domain, but I have now begun to wonder whether this will in fact be detrimental to our aim, namely increasing acceptance of open access among high-profile researchers in the HSS. In principle, however, it seems to me that funders do have a responsibility for developing the OA infrastructure in a way that allows for text mining and other methods of the digital humanities, and it might thus be best to require and establish such standards, provided that the researchers who publish understand their legal situation.
In fact, considering the JISC report on text and data mining that was mentioned by Gary, there are barriers to the full exploitation of texts even if the licenses allow for re-use (but require attribution). I do not think that we will be able to convince HSS scholars to relinquish their rights of attribution, and I wonder if the research assessment and evaluation systems will ever evolve in such a way that personal merit counts for less than the “culture of sharing” (as the recent communication and recommendations on open access by the EC call it), but for now I would be inclined to at least require re-use licenses as a common standard for funded projects.

The question of quality assurance is also tricky if, as Eelco pointed out, traditional (and renowned) scientific publishers do not necessarily adhere to the highest and most transparent standards of peer review. It would thus not make much sense to duplicate these processes in an OA monograph world. On the other hand, we have to make sure that the standards are acceptable and known to scholars, or at least that they have a chance of being accepted, and it is not yet clear to me what we as funders should request, other than that the standards of review at least follow currently established standards for individual disciplines. It would be quite good to have a “seal of approval” for publishers, one that renowned publishing houses could also apply for and that would serve as a mark of quality for funders, authors, readers and so on.

Best,
Angela

__________________
Dr. Angela Holzer
Deutsche Forschungsgemeinschaft (DFG)
German Research Foundation
-Scientific Library Services and Information Systems-
D-53170 Bonn

 

Angela, you wrote:

> I do not think that we will be able to convince
> HSS scholars to relinquish their rights of
> attribution . . .
> . . .  On the other hand, we have to make sure
> that the standards are acceptable and known to
> scholars, or else that they have a chance of
> being accepted . . .

I’m fairly sure that many HSS scholars don’t know
what is in their best interests or those of their
disciplines, and I’m not convinced that the only
reasonable actions are those that the scholars
support. In various places I’ve worked I’ve met
HSS scholars who were entirely opposed to all
digitization and who felt that the work they
produced on state-funded research leave belonged
to themselves and not to the state that funded it.
These people thought they had a perfect right to sell
to a publisher their state-funded writing and
they resisted OA as an interference in that right.

All of which is to say that I don’t think imposing
upon state-funded writers – using some of the stick
rather than all carrot – would be a gross violation
of anyone’s rights.

Gabriel Egan

In response to Angela, I think that, as a strong funder of research setting up a funding program for OA monographs, DFG should not be afraid to lead the way towards high-quality OA publishing.
As Suber points out in his June newsletter, when large funders adopt strong OA policies, publishers cannot afford to refuse work by the grantees. This would indeed mean requiring re-use licenses (in the case of books, CC-BY-NC should be acceptable). I would also argue that these OA books should be deposited in a central repository in an appropriate format (why not both HTML and PDF?).
Regarding quality control, DFG might consider a nuanced approach. I think Heather made an important point that we shouldn’t rush into ‘technocratic solutions that favor measurement over actual quality’. But this shouldn’t keep DFG from trying to ensure quality in the works it funds. Malcolm made a strong case for transparency as the best option. DFG could introduce transparency by asking publishers to provide a description of their quality control system. DFG should be able to establish whether this system is adequate, if needed with help from independent scholars in the relevant area. I think an effort of this kind is much needed in many countries in continental Europe, and there are examples of different approaches in Sweden and Austria. My impression is that in Germany there are quite a few presses inclined to publish OA books, but they are often relatively young and still thinking about trustworthy mechanisms for quality control. DFG’s call for funding could be just what the doctor ordered.

Eelco Ferwerda
OAPEN Foundation
www.oapen.org
e.ferwerda@oapen.org

OA Books

Hello Everyone,

I’ve been following this conversation with great interest. Whenever I’ve had a moment to sit down and even begin to think about writing, someone else has just posted something really important, and done so much more eloquently than I could.

However, as the days of this DOAB discussion are coming to an end, I thought I’d write – as an OA publisher and as a builder of library consortia (not sure if that’s a job description, but you’ll see later why I use it).

The issue I want to address is how prescriptive we should be in our quest for the perfect world of open access everything, all with suitable and easy-to-understand licenses, in a much friendlier copyright environment, with perfect and transparent quality assurance, publishing services from professional, preferably non-profit (or modest-profit) publishers, adequate funding for publishing worldwide, and accommodation of all new multimedia types and digital formats. We’re on a journey where we do need a sense of direction, but as Heraclitus said, you can’t step in the same river twice. The digital river we are traveling in is flowing very rapidly. There are many rocks along the way, and even the shapes and forms of these rocks are changing as the water and various objects in the stream hit against them. Some pretty nifty manoeuvrings are called for.

So, let me give a few examples of my experiences in white-water rafting. In 2008 Bloomsbury Publishing Plc invited me to set up their academic imprint, Bloomsbury Academic. We began by publishing monographs in 2010 on CC NC licences, allowing authors to further restrict the licence if they wished. We put an HTML version on the BA platform, but some authors wanted the PDF posted on the site too. My masters at the time were unlikely to agree to posting the PDF, for fear that it would cannibalise sales of print and ebooks. I’m convinced that had I dug my heels in and insisted on the PDF on the platform, the appetite for open access would have waned considerably. In any case, I had to contend with a mix of attitudes throughout the company, from hugely supportive, to indifferent, to downright hostile. I will always be grateful to Bloomsbury for allowing this experiment to take place, even if for some it was not under ideal conditions. The print and ebooks sell as well as (and sometimes better than) closed books, and while HTML does not suit everyone, it is at least free to the end user. Lesson learnt – don’t be too rigid about licensing and format; we need more experimentation.

At BA we were amortising our origination costs across the print and ebook versions, and this meant that the book prices were as high as ever. At the same time, library budgets, for books especially, were shrinking. So even if, in an ‘ideal’ world, all publishers adopted the BA business model, we would still need to sell the same number of units (or, put another way, extract the same amount of revenue from libraries) to support the business of publishing each book in the first place – even if it were available on open access.

I then thought about who actually pays for monographs. It is, of course, the libraries. So, the question arises how to make better use of the funds that already exist in libraries. The answer to me seemed to rest in consortium buying as this generally reduces costs. (I had some experience with consortia, having come up with the business model for EIFL when I worked for the Soros Foundation in the nineties.)

Applying these thoughts to monographs in today’s world led me to a business model that splits out the payment of origination costs (aka fixed costs, or ‘getting to first digital file’ costs). The challenge was how to make these a one-off payment through a library consortium, paid from existing library funds, in a way that reduces overall costs per book per library, keeps professional publishing input viable, and delivers open access.
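To make the arithmetic concrete – with purely hypothetical numbers, not Knowledge Unlatched’s actual figures – suppose a monograph’s origination costs come to £10,000 and 200 libraries share them as a single Title Fee: each library pays £50 per title (10,000 ÷ 200), and with 400 participants the share falls to £25. Each library pays a fraction of a typical hardback price, the publisher’s fixed costs are covered once, and the book itself can then be open access.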

And so I am now working on a pilot project with Knowledge Unlatched, a not-for-profit Community Interest Company (CIC), which will establish an international library consortium to pay for the origination costs of monographs in the form of a Title Fee – in return for open access. For a description of how the model works see http://www.knowledgeunlatched.org. Having spent a long time talking to stakeholders around the world, I found it clear that no single model could please all the people all the time. But there are enough elements in the model to garner both the financial support and the willingness to participate in a pilot that will start in 2013. There is a much greater appetite for experimentation among all participants in the scholarly eco-system than there was in 2008, which saw not only the start of BA but also the EU-backed OAPEN project. Knowledge Unlatched is deliberately transparent and structured in a way that allows for the flexibility, experimentation and adaptation that are essential for anyone white-water rafting in today’s digital river. Lesson learnt – hold the vision, experiment and hang on tight for the ride.

I agree with Caren and her argument for not being too prescriptive. We need buy-in and respect for key guidelines from all stakeholders. Following Eelco’s post, I would support transparency of the quality control process, with some kind of light-touch set of guidelines that enables inclusion of high-quality work regardless of source. All of the contributions to this discussion have certainly helped me in my thinking as I work through the practicalities of moving Knowledge Unlatched forward. Thank you all. Contact me if you’d like any more information about Knowledge Unlatched (apologies for the plug), and have a great summer.

Best,
Frances
Dr Frances Pinter
Knowledge Unlatched
21 Palmer Street
London SW1H 0AD

Quality and open access books

Eelco suggests three options for quality control in an OA environment: require strict peer review across the board; identify a number of adequate forms of quality control; or aim to make peer review procedures transparent.

To me, that ranks them in ascending order of preference. The last option leaves control in the hands of authors (do I want to be associated with a publisher who does that?) and readers (am I willing to read something published under that policy?). I’ve more confidence in the outcomes of that kind of disseminated decision-making than in top-down control. I don’t disagree with Gabriel when he says that many HSS scholars don’t know what is in their best interests or those of their disciplines. But we also have to ask whether there’s any reason to believe that planners, strategists and technocrats know what’s in the best interest of our disciplines. I can’t see much evidence that they do, and the evidence that they don’t is abundant (cf. Heather on the REF). So trusting a messy collective exploration of new possibilities to produce incremental enhancements of the collective culture looks to me a safer course. It will probably make progress frustratingly slow, but it’s less likely to screw things up badly.

Of course, that kind of disseminated process can only work properly if there are no structural obstacles or distortions. So (e.g.) if the publication system we currently have is tied to TA (toll access) by established commercial interests, OA mandates may be necessary to effect change. Likewise, if the transition to OA is obstructed by (perhaps unrealistic) concerns about its vulnerability to vanity and predatory publishing, then a relatively directive approach to quality control standards may be necessary to establish confidence and get things moving. Lack of transparency is also an obvious problem: people can’t make informed decisions if the information isn’t available. So I’m not wholly resistant to directive intervention; but I think it should be minimalist.

Brands (e.g. JISC, DOAB) that make their endorsement conditional on compliance with a set of principles are obviously well positioned to specify a set of quality control principles. And it would be entirely proper for funding bodies to include a requirement to publish with an OA publisher endorsed by one of those brands (cf. Angela’s “seal of approval”). This level of directive intervention would not carry too much risk of stifling the development of more anarchic, experimental approaches outside their zone of control.

The discussion has made me reflect again on how peer review actually matters to me as a researcher. Here I want to separate two sides of that role.

As a producer of research, peer review matters when I want to get something published. I would like to be prevented from publishing something really bad (I have had that good fortune!). Also, I would like to be helped to publish something that is as good as it can be: and then it’s not quality control that’s important to me, but quality enhancement – I want the reviewers to provide feedback that will help me improve the final product. For this purpose I’d much rather have detailed feedback from a single autonomous editor who’s an expert in the field and really understands how to get the best out of authors’ efforts than perfunctory approval from a couple of referees operating under the strictest principles of double blind reviewing. The usefulness of peer reviewers’ reports varies, obviously, between individual reviewers, but also between journals: presumably some editors prefer reviewers whose reports will contribute to quality enhancement, others are only interested in whether the reviewers will make a reliable qualitative judgement, and some (perhaps) are just going through the motions. Transparency about peer review policy is to be encouraged, and aids to transparency (such as the icon system Caren mentioned) are a good idea: but a peer review *policy* won’t necessarily reveal the peer review *culture*, which is much more important to me as an author.

As a consumer of research, I’m glad in a general way that peer review exists to apply some minimal level of filtering to the production of academic or academic-seeming books. But I know the filtering isn’t particularly rigorous: even the best publishers in my field sometimes put out stuff for which I’d have recommended rejection if I’d been a referee. And I wouldn’t want the filtering to be more rigorous: I also know of work that has struggled to get into print because referees have taken fright at its originality. Because peer review is fallible, and because there is ample scope in my discipline for disagreement about what the right peer review decision would be in any particular case, I would never dream of using the general quality of a publisher’s peer review to judge the quality of a particular publication. My sense of the general quality of a publisher’s peer review does have some influence on my decisions about how to allocate effort in getting hold of published material: some publishers are more likely to reward my efforts than others. But those decisions are influenced much more heavily by my sense of a book’s relevance to my current needs – so I rely on information about its contents from the publisher’s website, reviews, etc. If you need to know about the evidence for Menander Rhetor’s commentary on Demosthenes, you’ll read Heath 2006 – but not because you trust the publisher’s peer review policy, nor even because you trust the author’s expertise in late ancient rhetorical theory: those factors may contribute to raising your spirits, but your decision will actually be driven by the fact that you need to know about the evidence for Menander Rhetor’s commentary on Demosthenes and Heath 2006 contains the only substantial treatment of the subject since 1883.

From the consumer’s point of view, then, peer review doesn’t matter much to me in practice, though I like to think that I can take it for granted that there has been some filtering and enhancement going on in the background. From the producer’s point of view, what’s most important to me in peer review does not reliably correlate with what is expressed in formal peer review policies. So fixating on those policies in a sense misses the point, and being prescriptive about them carries some risk of detracting from the pursuit of quality (again I agree with Heather).

Malcolm

Hi all

I have really enjoyed the discussion to date, and would like to support
some of the more recent statements made by Malcolm and others.

An issue that Caren Milloy raised – one I would like to highlight with
bells and whistles – is that it is really important to both allow and
encourage new publishing enterprises to emerge. We are in a state of
transition, and I really doubt we have yet seen whatever dissemination
practices will eventually dominate – unless, that is, we allow
innovation to be stifled now.

I am really, really afraid of having industry-defined standards that
‘acceptable’ publishers have to meet. In almost any industry you care
to look at, such standards rapidly become controlled by established
vested interests and are used to stifle innovation and entry. So I
shudder at the thought of any body – especially one made up of existing
publishers – defining an industry standard about what a publication is
or should be, or what ‘acceptable’ practices are – be they peer review,
dissemination techniques, or anything else.

But – as in almost any other industry – there is real social benefit in
having assessment agencies providing users with information about the
reliability and quality of the ‘products’, provided they are run
independently of the producers. So I would support proposals for
validation by agencies such as JISC and DOAB, provided that they are
flexible and open to including new initiatives in their assessment
process, they don’t all coordinate on precisely the same set of criteria,
and grant giving bodies resist the temptation to coordinate on the use of
just one. By reducing information asymmetries these agencies can play an
important role in developing our trust and acceptance of new methods and
practices, and allow us to move away from traditional practices more
easily.

Peer Review and Quality:
The difficulty we face is that not all research is equally good and so we
fall into some reliance on the ‘name’ of a publisher as a signal of the
quality of the publication. This, of course, leads to a vicious cycle:
the publishers with the best reputation attract the best submissions,
establishing a powerful position within the industry and creating a
huge ‘barrier to entry’ for any new or innovative publisher to
overcome.
Accreditation of new entrants by JISC or similar organisations can reduce
this reliance on established practices and facilitate the adoption of new
techniques – providing they recognise the role they are playing in
facilitating change and don’t get manipulated by the publishing industry
itself.

But I also feel that any procedural ‘requirement’ for a peer review
process is pretty close to meaningless. Differences in assessment
procedures have been noted by others in this discussion. We all know that
some academic publishers maintain higher standards than others, even if –
procedurally – their peer review process is the same. Similarly, within
single academic presses the reputation of different disciplines can
vary markedly. The ‘process’ of selection doesn’t guarantee, or even
protect, the quality of the product. So publisher assessment needs to
go beyond something as formulaic as that.

Grant giving bodies:
Grant giving bodies also need to explicitly recognise the important role
they play in facilitating change – and not get trapped into formulaic
responses that can be used to stifle innovation. Requiring that any
publication must come from a specifically defined group of publishers or
‘standards’ would be bad news – especially if acceptance to that select
group is controlled by the publishers themselves.

Similarly, as others have noted, grant giving bodies are in the wonderful
position of being able to force researchers and academics to accept new
practices they may be reluctant to voluntarily adopt – and shouldn’t be
afraid to exercise that power. But to allow innovation they need to be
flexible in their requirements, rather than looking to provide hard and
fast rules. There are many areas where CC-BY licences are the most
socially desirable, and grant bodies may reasonably expect that as the
default licence for research they finance. But there are some areas where
a CC-BY licence may actually damage the quality of the research that can
be undertaken. So grant givers may want to set CC-BY as a default
expectation, but allow researchers to identify in their proposal what
their dissemination strategy will be and whether there are
research-critical reasons why CC-BY is not appropriate for some of the
research outputs.
Equally the researcher may have valid reasons why dissemination should
occur through a channel not previously recognised by the funding body and
where specific or default requirements are not appropriate. But many of
these issues can be raised by the researcher in the grant application
process – and assessed at that stage.  So my suggestions would be to make
the dissemination strategy an explicit part of the research proposal
(provide default expectations rather than hard and fast rules in the
guidelines) and then judge the proposal as a whole when making funding
decisions.

Thanks
Rupert


Dr. Rupert Gatti
Director
Open Book Publishers
www.openbookpublishers.com
See our latest catalogue at
https://www.openbookpublishers.com/shopimages/LatestCatalogue.pdf


Academic publishers as ice-cream vendors

The ice-cream analogy (http://www.knowledgeunlatched.org/about/business-model/) captures the crucial content/added value distinction splendidly. A real treat – thank you!

Best,

Malcolm
