AI Policy

At the Journal of Political Science: Bulletin of Yerevan University (JOPS), we recognize that generative AI tools are increasingly shaping how research is written, translated, and shared. To uphold integrity, transparency and responsible scholarship, we have adopted the following principles for the ethical use of AI-assisted technologies. These principles are informed by the best practices of leading publishers such as Elsevier and the Committee on Publication Ethics (COPE), but adapted to reflect JOPS’s own editorial values and disciplinary context.

AI Tools Cannot Be Listed as Authors

Tools like ChatGPT, Gemini, Claude, and other AI systems cannot be named as authors under any circumstance. Authorship requires original intellectual input, responsibility for the work and adherence to ethical and academic norms. Because AI tools lack independent reasoning and accountability, they do not meet these requirements. Only individuals who have genuinely contributed to the research and writing process may be credited as authors in JOPS publications.

Disclosure of AI Use

Authors must openly state any use of generative AI or AI-assisted tools in the preparation of their manuscripts. This applies to all uses, including writing or editing text, translating materials, generating visuals or data, or suggesting references. Such information should be stated clearly in the Acknowledgements or Methods section.

Example of a disclosure:

“The authors used OpenAI’s ChatGPT to help refine the wording of the Introduction. All text produced by the tool was reviewed and verified by the authors.”

Human Oversight and Accountability

AI can support research and writing, but full responsibility for the accuracy and integrity of the manuscript remains with the human authors. Any AI-generated material must be carefully checked for factual accuracy, originality and ethical soundness. JOPS holds authors fully accountable for any mistakes, inaccuracies or ethical concerns arising from the use of AI tools.

Misuse of AI

JOPS enforces a zero-tolerance policy toward the unethical use of AI. The following actions are strictly forbidden:

  • Fabricating or altering research data or results using AI tools.
  • Creating fake or unverifiable citations.
  • Manipulating images, figures, or data in misleading ways.
  • Using AI to impersonate reviewers, generate fraudulent peer reviews, or interfere with the editorial process.

If such practices are detected before or after publication, the manuscript will be rejected or retracted, and relevant institutions or authorities will be notified.

Reviewer Use of AI

Peer reviewers may not use generative AI tools to draft or edit their reports unless the editor has granted explicit permission. If permission is given, reviewers must safeguard the confidentiality of the manuscript and take full personal responsibility for the content of their review. Any use of AI must be disclosed to the editor upon submission.

Ongoing Review and Further Resources

JOPS encourages everyone involved in the publication process (authors, reviewers and editors) to stay informed about the ethical and practical implications of AI in academia. For further guidance, we recommend consulting the relevant recommendations of the Committee on Publication Ethics (COPE) and of leading publishers such as Elsevier.

This policy will be regularly reviewed and updated as new technologies, standards and ethical challenges emerge.