Working towards an AI policy for SWIB: an appeal to the SWIB community

Dear SWIB community members!

We do have community guidelines for this forum and for the conference (Community Guidelines - SWIB Forum), but we do not have an explicit policy for the use of the newest generation of AI methods (in particular what I like to call “Large AI”, which is mostly generative AI).

There are various areas of organizing a conference and forum where the use of AI-based tools, and the conditions for that use, can be put up for discussion: the generation of abstracts by authors, for example, or the generation of reviews for submissions (we are obviously not going to automate that!), and then of course the content of the proposals themselves.

In a sense, SWIB has always been about AI, since semantic technologies are a subfield of AI (symbolic AI) that has been somewhat eclipsed by the current hype surrounding deep learning (a subfield of machine learning, which in turn is another subfield of AI: subsymbolic AI). We did add machine learning to the list of topics some years back because combining the two approaches seemed worth exploring. Also note that SWIB has always been about open source technologies: SWIB has never given and will not give a platform to commercial solutions, and that includes generative AI tools.

Large AI is putting an enormous strain on both the climate and society, which is obviously at odds with SWIB’s stance on sustainability. However, approaches that shrink that footprint and that work with AI models which are as open source as possible, can be run locally, and adhere to ethical conditions could very well fall within the range of SWIB topics in the future.

In any case, we want to be as transparent as possible concerning our AI policy for SWIB, and that starts with the process of arriving at an explicit policy – we want to involve you, the community. So get in touch with us and tell us what you think and what would be important to you!

Looking forward to a lively discussion here in the forum!

Link to an example from another conference: AI Policy

Thanks, Argie, for starting this discussion. I would very much welcome it if SWIB were to establish a policy re. “AI”. The FluConf policy, to which you linked, could provide an excellent starting point. I’d suggest two additions: since many institutions in our field are hit hard by reckless LLM crawlers, that should be included explicitly among the negative impacts. And among the strongly discouraged content, AI-generated images on slides could perhaps be mentioned, too.

The larger issue – how the forced proliferation of genAI hurts science, education and libraries in general – should also be dealt with in a forum such as SWIB. Perhaps that could be the subject for a SWIB keynote in future years?

Cheers, Joachim


I’m so glad to see SWIB thinking seriously about this! Straw-persons for consideration:

  • Any submission documenting use of commercial generative AI should receive an immediate rejection without benefit of peer review. This includes text, image, and code generators. Obviously this stricture should be enshrined in policy and documented in submitter instructions.
  • Slides, text, and (if any?) abstracts/papers should not use generative AI in any way.
  • Machine learning that is not generative AI is in bounds, but submitters should address questions of ethics (e.g. data sources for training, minimal computing, use of undercompensated labor) in their abstracts and presentations. Reviewers should be alerted to downgrade scores on submissions with questionable ethics.

I would be delighted to see that keynote, Joachim. I’d even give it, if I could do so remotely, but you can likely do better – I’d suggest Alex Hanna or Emily Bender.
