As the use of Artificial Intelligence (AI) tools in academic publishing continues to evolve, the Journal of Fundamental and Applied Research closely monitors developments in this area. I-EDU GROUP, the publisher of the journal, reviews and updates these policies as necessary to ensure that ethical standards, transparency, and research integrity are maintained. This policy outlines the guidelines for the use of AI in authorship, image creation, and peer review, ensuring that all stakeholders are aligned with best practices and legal considerations.


1. AI Authorship

Large Language Models (LLMs) such as ChatGPT do not meet the journal’s authorship criteria: authorship carries accountability for the work, and accountability cannot be assigned to an AI tool. Authors are responsible for the content and integrity of their manuscripts. Therefore, LLMs, while useful for generating text or assisting with editing, cannot be credited as authors.

Guidelines for AI Use in Authorship:

  • If an LLM or another AI tool is used in the research process, such as for drafting or revising sections of the manuscript, it must be clearly documented in the Methods section or, if no Methods section is available, in another appropriate part of the manuscript.
  • AI-assisted copy editing—defined as AI-assisted improvements to human-generated texts for readability, style, grammar, spelling, punctuation, and tone—does not need to be disclosed. However, authors must take full responsibility for the final version of the text, and the edits must align with the authors’ original work.
  • There must be human accountability for the final version of the text, meaning authors must approve and ensure that the work reflects their intellectual contributions.

AI tools can assist with generating content or revising drafts, but authorship remains a human responsibility.


2. Generative AI Images

Generative AI image creation, while a rapidly evolving field, presents legal and ethical challenges, particularly regarding copyright and research integrity. I-EDU GROUP adheres strictly to existing copyright law and ethical guidelines. Due to the current ambiguity in legal frameworks concerning AI-generated images, the journal does not permit the publication of generative AI images unless specific exceptions apply.

Exceptions:

  • Images or visuals obtained under contractual agreements from agencies whose image-creation practices are legally compliant.
  • Images or videos that are specifically discussed in articles about AI itself. These cases will be reviewed on an individual basis.
  • The use of generative AI tools that are based on scientifically verifiable data and can be attributed, checked for accuracy, and comply with ethics and copyright restrictions.

All exceptions must be clearly labeled as AI-generated in the image field.

Note: This policy applies to all types of images, including video, animations, photography, scientific diagrams, photo-illustrations, editorial illustrations (such as cartoons or drawings), and 2D/3D visual representations. It does not apply to non-image, text-based items like tables, charts, or simple graphs.

AI Image Manipulation: If AI tools are used to manipulate, enhance, or combine existing images or figures, this must be disclosed in the relevant figure caption. This ensures transparency and allows for a case-by-case review.


3. AI Use by Peer Reviewers

Peer reviewers are an essential part of the scientific publishing process, providing valuable expertise and assessments that guide editorial decisions. While the use of generative AI tools has advanced rapidly, these tools have significant limitations that make them unsuitable for evaluating manuscripts without human oversight. AI tools may lack up-to-date knowledge and can produce biased, nonsensical, or incorrect information. Furthermore, manuscripts often contain sensitive or proprietary data that should not be shared outside the review process.

Guidelines for Peer Reviewers:

  • AI in Peer Review: Peer reviewers must not upload manuscripts into generative AI tools for evaluation, and generative AI tools should not be relied upon to assess the quality, accuracy, or integrity of the submitted work.
  • Disclosure of AI Use: If a peer reviewer has used any AI tool to assist in evaluating the manuscript, they must declare this in their peer review report. Full transparency about the use of AI tools ensures that the peer review process remains trustworthy and unbiased.

The journal is exploring ways to safely provide peer reviewers with access to AI tools that could assist in their work, but these tools should not replace human judgment or oversight in the peer review process.

ISSN 2181-3329 (Print)
ISSN 2181-3205 (Online)