• ukrlingmed@ukr.net
  • +38 (044)-279-18-85
  • Print ISSN 3083-6204
  • e-ISSN 3083-6212
ARTIFICIAL INTELLIGENCE (AI) POLICY

The editorial board of the journal Ukrainian Linguistic Medieval Studies recognizes that Artificial Intelligence (AI) and AI-based tools can be valuable aids in the preparation of scholarly publications. At the same time, the application of such technologies must be grounded in the principles of academic integrity, scientific responsibility, and transparency.

The journal’s policy has been developed in accordance with the “Recommendations on the Use of AI in Scholarly Communication” by the European Association of Science Editors (EASE). The editorial board strives to maintain a balance between innovative methods and academic integrity, ensuring transparency and trust in scientific publications, as well as compliance with international standards of publication ethics as recommended by COPE, WAME, and the JAMA Network.


PERMISSIBLE USES OF AI

Authors may use AI technologies during the manuscript preparation process only as supporting tools, specifically for:

  • searching for literature and analytical materials (subject to mandatory subsequent human verification of sources);
  • improving writing style and linguistic accuracy of the text;
  • creating graphic materials (charts, diagrams, visualizations), provided their use does not distort the scientific content and does not infringe on the copyrights of original images;
  • translation and technical editing of the text;
  • formatting the manuscript according to the journal’s requirements.


UNACCEPTABLE USES OF AI

The use of AI is strictly prohibited if it compromises the originality, reliability, or scientific integrity of the work. Specifically, the following are unacceptable:

  • Generation of scholarly content. The presentation of theoretical frameworks, descriptions of methodology, interpretation of results, or formulation of conclusions must be the product of the author’s intellectual labor. Any use of AI in these areas without proper human oversight is inadmissible.
  • Citations and literature reviews. The editorial board strongly advises against using AI-generated output as a primary source to support specific claims, as AI-provided citations and information may be inaccurate or fabricated. Authors bear full responsibility for the accuracy of this information and the proper use and citation of sources.
  • Masking plagiarism through paraphrasing or automated rewriting of others’ texts.
  • Listing AI as an author or co-author, as well as citing AI as an author. AI cannot be credited as an author or co-author because it cannot be held accountable for the accuracy, integrity, and originality of the work, does not hold copyright, and cannot assume legal obligations.


DISCLOSURE REQUIREMENTS

Authors are required to explicitly declare the use of AI tools or AI-supported technologies during the preparation of their manuscript. Failure to disclose such use is considered a violation of academic integrity.

A separate section titled “AI Use Declaration” must be included before the References, clearly specifying the name of the tool and the purpose of its application. The description must be as transparent as possible. For example:

During the preparation of this article, the authors used Grammarly to improve the writing style and correct linguistic errors.

During the research process, the authors used ChatGPT-4o to conduct a literature search. All suggested sources were subsequently verified and analyzed by the authors. The AI Assistant (Adobe Acrobat Online) was also used for document processing.


AUTHOR RESPONSIBILITY

The use of Artificial Intelligence technologies must occur under continuous human supervision and control. Authors are required to thoroughly verify and edit all AI-generated outputs, as AI may produce content that appears convincing but is factually incorrect, incomplete, or biased.

Authors bear full responsibility for the content, reliability, and originality of all materials prepared using AI. The application of AI does not exempt authors from the obligation to ensure scientific novelty, data accuracy, and strict adherence to ethical standards.

Non-compliance with this policy (specifically, concealing the use of AI or submitting unverified automated content) is grounds for the rejection of the manuscript during the peer-review stage or its retraction after publication.


REVIEWER RESPONSIBILITY

Peer review is the foundation of the scientific ecosystem, requiring critical thinking and original evaluation, qualities inherent only to humans. The Editorial Board adheres to the highest standards of integrity and establishes the following rules for reviewers:

  1. Confidentiality of Manuscripts. A submitted manuscript is a confidential document. Reviewers are strictly prohibited from uploading the text of an article, or any part thereof, into generative AI tools. Doing so may lead to: violations of copyright and ownership rights to unpublished data; leakage of confidential and personal information; and unauthorized training of AI models on the authors’ intellectual property.
  2. Confidentiality of Review Reports. The requirement for confidentiality also extends to the reviewer’s report. Reviewers must not use AI to write or edit their conclusions (even for grammar or style correction), as this may lead to the disclosure of confidential information about the manuscript or its authors.
  3. Human-Only Expert Evaluation. Reviewers must not use AI technologies to generate expert opinions on an article. Scholarly evaluation implies a level of responsibility that only a human can bear. The use of AI creates risks of producing incorrect, incomplete, or biased results, which discredits the independent peer-review process.

The reviewer bears full personal responsibility for the content and objectivity of the provided review.


EDITORIAL RESPONSIBILITY

Editorial management and the evaluation of scholarly manuscripts involve a high level of professional responsibility that can only be borne by a human. The decision-making process must be transparent, unbiased, and protected from external AI interference.

  1. Confidentiality of Manuscripts. A submitted manuscript is a confidential document. Editors are strictly prohibited from uploading the text of an article, its parts, or authors’ personal data into generative AI systems. This is due to risks of: unauthorized use of authors’ intellectual property for training neural networks; breach of confidentiality and leakage of personal information; and loss of control over ownership rights to unpublished materials.
  2. Confidentiality of Correspondence. The requirement for confidentiality extends to all stages of correspondence. Editors must not use AI to write or edit decision letters, as they contain confidential expert conclusions and personal data. Using AI to improve the language of such messages is also inadmissible due to the risk of information disclosure.
  3. Human-Only Decision Making. AI technologies cannot be used to evaluate manuscripts or make editorial decisions. Critical analysis and original assessment of scientific quality are beyond the capabilities of AI. The editor bears personal responsibility for the entire editorial process, the objectivity of the final decision, and the content of communications sent to authors.

If an editor has a reasonable suspicion of a violation of this Policy by an author or reviewer, they are obliged to formally notify the Editorial Board for further investigation in accordance with COPE (Committee on Publication Ethics) standards.