Artificial Intelligence (AI) Usage Policy

1. General Principle

AKUMULASI recognizes that Artificial Intelligence (AI) and machine-learning tools, such as ChatGPT, Gemini, Copilot, and Grammarly, can assist the research and writing process. However, AI cannot replace human authorship, accountability, or critical judgment.

Any use of AI in preparing manuscripts must uphold principles of transparency, integrity, and responsibility consistent with COPE Core Practices.

2. Prohibition of AI Authorship

AI tools and automated systems must not be listed as authors or co-authors. Authorship entails accountability for originality, intellectual contribution, and ethical responsibility, qualities that AI systems do not possess.

Only human contributors who meet the authorship criteria, including conceptualization, analysis, writing, and final approval, may be credited as authors.

3. Disclosure and Transparency

Authors must explicitly disclose in the manuscript if and how AI tools were used, including:

  • The name and version of the tool,
  • The purpose of use, such as grammar correction, data analysis, translation, image generation, or summarization,
  • The extent of human oversight and revision.

A statement such as the following should be included in the Acknowledgments or Methods section:

Portions of this manuscript were assisted by [AI tool name, version], used for [describe purpose]. All interpretations, analyses, and conclusions are the author’s own responsibility.

Failure to disclose AI use is considered a breach of publication ethics.

4. Integrity and Verification

Authors are fully responsible for:

  • The accuracy, originality, and truthfulness of all AI-generated or AI-assisted content,
  • Verifying that AI output does not contain fabricated data, false citations, or plagiarized material,
  • Ensuring AI tools have not accessed confidential or restricted data.

Editors may request AI prompt logs or methodological explanations when necessary to verify transparency.

5. Ethical and Legal Compliance

The use of AI must not:

  • Compromise data privacy or participant confidentiality,
  • Involve unauthorized use of copyrighted or personal data,
  • Generate or manipulate images, audio, or text in a deceptive way,
  • Be used to fabricate or falsify research results.

AI applications for predictive accounting analysis, financial forecasting, decision support, or text summarization must comply with applicable data protection laws, including Indonesia’s Law No. 27 of 2022 on Personal Data Protection, as well as academic integrity standards.

6. Editorial and Peer Review Integrity

AI tools must not be used by editors or reviewers to:

  • Read, summarize, or evaluate confidential manuscripts,
  • Generate peer review reports.

Peer review relies on expert human judgment. Any proposed use of AI by a reviewer must be declared to, and approved by, the Editor-in-Chief in advance.

Editors may apply AI-detection tools only to support, not replace, human assessment of originality and language quality.

7. Data Management and Reproducibility

If AI or machine-learning models are part of the research method, authors must provide:

  • Sufficient detail for reproducibility, including model type, parameters, training data sources, and validation process,
  • Ethical approval, where human or sensitive financial data were used,
  • A statement on data provenance and bias mitigation.

Authors must ensure that datasets or algorithms trained on sensitive financial records, personal data, or proprietary documents are anonymized and comply with applicable law.

8. Sanctions for Misuse

Undisclosed or unethical use of AI, including ghostwriting, text fabrication, deepfake imagery, or citation falsification, constitutes scientific misconduct.

Sanctions may include rejection before publication, retraction after publication, or notification of the author’s institution, in accordance with COPE flowcharts.

9. Continuous Monitoring

Editors and reviewers will receive periodic training on identifying, evaluating, and responsibly integrating AI tools into research assessment.