I have been going to The Charleston Conference for 7 or 8 years now and it remains one of my very favorite conferences. This year the in-person conference took place Nov. 7-10 (in Charleston, SC – thus the name) and the virtual one took place Nov. 27th – 30th. Many of the virtual conference sessions were recordings of sessions that happened in person, but not all. I recorded a panel presentation that was only virtual, for example. If any of these sessions I discuss below are of interest to you let me know as they often make the recordings available to non-conference attendees in the spring.

Three overarching themes for me at the conference were 1) research integrity, 2) artificial intelligence, and 3) novel approaches to demonstrating research impact. These are all very broad topics with nuance and intersections, but they offered a lot of food for thought. Below are some of the session and vendor highlights – others who went will chime in with other notes, I’m sure.

The opening keynote was a panel of the top brass at Elsevier, Clarivate, and Springer, moderated by JSTOR/Ithaka. It was more interesting than I had expected, with a few nuggets I jotted down:

  • Elsevier started tracking female representation on journal editorial boards in 2020. When they started, they were at just 16%, but they have gotten it up to 30% in three years.
  • Springer has seen an 80% increase in usage over the last four years.
  • China is adding new universities and new libraries (one of the few places in the world adding them), and it now surpasses all other countries in research output: 750,000 articles last year, compared with the EU’s 500,000 and the US’s 450,000.
  • In 2022, 40% of Elsevier’s articles and 50% of Springer’s were OA.
  • To protect its research reputation, China is prohibiting faculty from publishing in predatory journals and article mills.

I attended a session on book challenges and bans in higher education, and it raised some interesting questions about whether higher education institutions are really as immune from these issues as we sometimes think. The overarching conclusion was that you should have a policy about book challenges in place BEFORE one comes at you. Thanks to Kathy’s great work with the Collections Executive Committee, ZSR already has a reconsideration policy in place. This session also made the point that not all book challenges are censorship – sometimes books and other materials contain information that is outdated and even harmful, so our knee-jerk reaction should not ALWAYS be to resist reconsidering materials for our collection.

Another really thought-provoking session I attended was all about research integrity. Three interesting points came up:

  • In higher education we leave research integrity up to individual researchers on pretty much an honor system, and that opens labs and institutions up to risk and compliance issues when they have to meet state, federal, or grant requirements and mandates. Often the tenure and promotion incentives in US universities make the temptation to fudge or falsify results pretty strong.
  • How can we incorporate discussions of research integrity into information literacy classrooms across the disciplines? This is especially relevant if we are teaching majors or graduate students.
  • Open Access models mean we are paying for research to be published rather than paying for published research, so how is our gatekeeper role changing? If more publications get us closer to a research status we want (like the new R1 system), then will incentives to put out more research risk putting out worse research?

One last session I went to before I get to the vendor stuff was about retractions in scholarly publishing. Again, a few morsels of food for thought:

  • Are we looking at the research in our Institutional Repositories and marking it as retracted if it gets retracted by the journal that published it? If not, what happens if someone finds the article through our IR and not via the journal that has it clearly marked as retracted? What about other places the article might be linked? Preprint servers, ResearchGate, conference proceedings, university blog posts, etc.
  • A 2021 study in Quantitative Science Studies looked at the continued use and citation of retracted research after retraction, and found a shockingly low percentage (5.4%) of citing works that mentioned the retraction.
  • Because we leave the research integrity piece in the hands of researchers, institutions are relatively unscathed by retractions. The same often goes for publishers and academic associations. Should there be more pressure on these parties to make sure bad research never makes it into print? The incentive structure does not work this way currently.

One of the main reasons I love the Charleston Conference so much is that you get a lot of time to talk to and hear from vendors – either at the very egalitarian vendor showcase on the first day (where every vendor gets the same-size table), or at sessions or meals. Here were some of the most exciting and interesting product takeaways:

  • There is a new-to-the-US product called Overton that collects policy documents from all over the world and helps faculty and institutions determine where their research has been used in the policy-making process. This is a fascinating way for us to demonstrate research impact. Sage will soon release a free overlay to Overton where faculty can search their names and get a list of where they have been cited in policy. The more complex features and the full database of policy documents would be available to institutions that subscribe.
  • Paratext has a new database of databases called BIRD of Paratext. It’s REALLY interesting. It lets you search on a discipline and see the list of databases tagged as relevant to that discipline, including both for-fee and free databases, and it’s a one-time purchase model. I had them do “Public Administration” as a discipline, for example, since SPS is proposing an MA, and it brought up a list of PA databases. If we purchased it, we could connect it to our FAD list and it could surface databases that we don’t have. In the future they also want to add the capability for schools to compare their holdings to those of other schools. So we could identify schools similar to us that have a program we are starting or hoping to support more, and see what databases they have that we don’t.
  • Lots of folks are bringing text analysis and/or teaching with text analysis to their platforms: JSTOR’s Constellate, the Gale Digital Scholar Lab (which will now be included in some Gale subscription platforms), and others.
  • PolicyMap is adding a Social Determinants of Health data collection. This is a cluster of non-medical data points that have been identified as influencing health outcomes. The data is not new – but the grouping of it is.
  • New primary source collections of interest from around the vendorsphere:

Eastview Press’s Global Press Archive has some really intriguing content from the Global South, the former Soviet Union, and central Eurasia. These are not only foreign-language titles but also English-language newspapers from these regions, both historical and more current.

All in all a fabulous conference!!