AI Usage Policy
1. General Principle
CMES (Center of Middle Eastern Studies) recognizes that Artificial Intelligence (AI) and machine-learning tools (e.g., ChatGPT, Gemini, Copilot, Grammarly) can assist research and writing processes. However, AI cannot replace human authorship, accountability, or critical judgment.
Any use of AI in preparing manuscripts must uphold principles of transparency, integrity, and responsibility consistent with COPE Core Practices.
2. Prohibition of AI Authorship
AI tools and automated systems cannot be listed as authors or co-authors.
Authorship implies accountability for originality, intellectual contribution, and ethical responsibility—qualities that AI systems do not possess.
Only human contributors who meet authorship criteria (conceptualization, analysis, writing, approval) can be credited as authors.
3. Disclosure and Transparency
Authors must explicitly disclose in the manuscript if and how AI tools were used, including:
1. the name and version of the tool,
2. the purpose of use (e.g., grammar correction, data analysis, translation, image generation, text summarization), and
3. the extent of human oversight.
A statement such as the following must be included in the Acknowledgments or Methods section:
“Portions of this manuscript were assisted by [AI tool name, version], used for [describe purpose]. All interpretations, analyses, and conclusions are the author’s own responsibility.”
Failure to disclose AI use is considered a breach of publication ethics.
4. Integrity and Verification
Authors are fully responsible for:
1. the accuracy, originality, and truthfulness of all AI-generated or AI-assisted content,
2. verifying that AI output does not contain fabricated data, false citations, or plagiarized material, and
3. ensuring AI tools have not accessed confidential or restricted data.
Editors may request the submission of AI-prompt logs or methodological explanations when necessary to verify transparency.
5. Ethical and Legal Compliance
The use of AI must not:
1. compromise data privacy or participant confidentiality,
2. involve unauthorized use of copyrighted or personal data,
3. generate or manipulate images, audio, or text in a deceptive way, or
4. fabricate or falsify research results.
AI applications for predictive legal analysis, case generation, or text summarization must comply with data-protection law (including Indonesia’s Law No. 27 of 2022 on Personal Data Protection) and academic integrity standards.
6. Editorial and Peer Review Integrity
AI tools must not be used by editors or reviewers to:
1. read, summarize, or evaluate confidential manuscripts, or
2. generate review reports.
Peer review relies on expert human judgment; any use of AI by reviewers must be declared to the Editor-in-Chief and approved before use.
Editors will apply AI-detection tools only to support, not replace, human assessment of originality and language quality.
7. Data Management and Reproducibility
If AI or machine-learning models are part of the research method, authors must provide:
1. sufficient detail for reproducibility (model type, parameters, training data sources, validation process),
2. ethical approval (if human or legal data were used), and
3. a statement on data provenance and bias mitigation.
Authors must ensure that datasets or algorithms trained on sensitive legal cases, personal data, or court documents are properly anonymized and legally compliant.
8. Sanctions for Misuse
Undisclosed or unethical use of AI (e.g., ghostwriting, text fabrication, deepfake imagery, or citation falsification) constitutes scientific misconduct.
Sanctions include rejection before publication, retraction after publication, or notification of the author’s institution, in accordance with COPE flowcharts.
9. Continuous Monitoring
Editors and reviewers will receive training on identifying and responsibly integrating AI tools in research assessment.
CMES will periodically review its AI-ethics policy to adapt to the evolving standards of COPE, WAME, and the Indonesian Committee on Publication Ethics (KNEP).
Model Disclosure Example for Authors
AI-Use Disclosure: The authors used OpenAI’s GPT-5 (October 2025 version) to refine the English grammar of this manuscript. The authors reviewed, verified, and are responsible for all final content.









