Lumina Quest Publishing Policy on Generative Artificial Intelligence

Purpose and Scope

This policy defines the standards of Lumina Quest Publishing regarding the use of generative artificial intelligence (AI) and AI-assisted tools throughout the scholarly publishing process.

The policy applies to authors, peer reviewers, editors, and editorial staff across all journals published by Lumina Quest Publishing and aims to ensure that AI technologies are used ethically, transparently, and responsibly, without compromising research integrity or publication ethics.

This policy aligns with international best practices and recommendations, including:

  • Guidelines of the Committee on Publication Ethics (COPE)
  • Recommendations of the International Committee of Medical Journal Editors (ICMJE)

The objective is to prevent misuse of AI while allowing limited, clearly disclosed, and responsible use that improves clarity and efficiency without undermining the originality, accountability, or credibility of scholarly work.

Definition of Generative AI

For the purposes of this policy, generative artificial intelligence refers to any AI system capable of producing content—such as text, images, data, code, or graphics—in response to prompts or inputs.

Examples include, but are not limited to:

  • Large language models and chatbots (e.g., ChatGPT, GPT-4, Bard)
  • Text, image, and code generation tools (e.g., DALL·E, Copilot)
  • AI-assisted writing or editing systems

All references to “AI tools” in this policy refer specifically to such generative systems.

Acceptable Use of AI Tools by Authors

Authors may use generative AI tools only in a limited, supportive role, provided such use does not replace the authors’ own intellectual contribution or responsibility.

Permissible uses include:

  • Language editing and proofreading: improving grammar, spelling, clarity, and readability, provided the scientific meaning is not altered.
  • Formatting and copy-editing assistance: supporting reference style consistency, layout adjustments, or minor text restructuring.
  • Data analysis and visualization support: generating code, scripts, or preliminary visualizations using the authors’ own data, provided all outputs are fully verified by the authors.
  • Idea organization and literature structuring: assisting in organizing known literature or brainstorming research directions, while ensuring that final interpretations, arguments, and conclusions are developed independently by the authors.

In all cases, authors must critically review, edit, and validate AI-assisted outputs and remain fully responsible for the accuracy, integrity, and ethical compliance of the manuscript.

Prohibited Uses of Generative AI

The following uses of generative AI are strictly prohibited:

  • Substitution for original scholarly work: delegating the writing of substantive manuscript sections (e.g., Methods, Results, Discussion, Conclusions) to AI tools. Core scientific reasoning and interpretation must originate from human authors.
  • Fabrication of data or references: using AI to invent data, experiments, patient cases, images, references, citations, or DOIs.
  • Plagiarism or misappropriation: using AI to paraphrase or translate content from existing sources without proper citation. AI-generated text must not be presented as original work without attribution.
  • Misrepresentation of authorship: listing AI tools as authors or co-authors, or attributing scientific judgment or decision-making to AI systems.

Any undisclosed, deceptive, or inappropriate use of AI constitutes research and publication misconduct.

Disclosure and Transparency Requirements

Full transparency is mandatory.
If generative AI tools are used at any stage of manuscript preparation, authors must:

  • Disclose AI use in the cover letter at submission
  • Include a clear AI use statement in the manuscript (e.g., Acknowledgements, Methods, or a dedicated “Use of AI Tools” section)

Example Disclosure Statement:
“The authors used the generative AI tool ChatGPT (OpenAI) to assist with language editing. All AI-generated suggestions were reviewed and edited by the authors, who take full responsibility for the content of this manuscript.”
Minor use limited to basic spell-checking or grammar correction may not require disclosure; however, whenever authors are uncertain whether a use is disclosable, they are strongly encouraged to disclose it.

Authorship and Author Responsibility

Generative AI tools cannot be listed as authors or co-authors.
Authorship requires:

  • Substantial intellectual contribution
  • Public accountability for the work
  • Ability to respond to critiques and ethical concerns

In accordance with ICMJE authorship guidelines, these criteria can be met only by human contributors.
The use of AI does not reduce author responsibility. Authors remain fully accountable for:

  • Accuracy and completeness of content
  • Integrity of data and analyses
  • Absence of plagiarism, bias, or fabrication

Use of AI in Peer Review

Manuscripts under review are strictly confidential.

Peer reviewers must not:

  • Upload manuscript content to external AI tools
  • Use AI to generate, rewrite, or substantially edit review reports
  • Share confidential materials with AI systems outside editorial control

Reviewers must provide independent, expert human judgment. Any limited use of AI (e.g., grammar checking of reviewer comments) must not compromise confidentiality, and reviewers remain fully responsible for their evaluations.

Use of AI in Editorial Workflows

Editors and editorial staff must preserve confidentiality and independence of editorial judgment.

Accordingly:

  • Unpublished manuscripts, reviews, and editorial decisions must not be uploaded to generative AI systems
  • AI tools must not be used to make acceptance, revision, or rejection decisions
  • Non-generative tools may be used for technical checks (e.g., plagiarism detection, formatting review), provided data protection and ethical standards are maintained

Screening and Verification of AI Use

Lumina Quest Publishing may use editorial assessment and analytical tools to:

  • Detect suspected AI-generated or AI-assisted content
  • Identify fabricated data or references
  • Verify consistency and integrity of submissions

If undeclared or inappropriate AI use is suspected, the editorial office may:

  • Request clarification from authors
  • Ask for original data, analysis files, or draft versions
  • Initiate additional ethical review

Consequences of Policy Violations

Violations of this policy are treated as research or publication misconduct.

Possible actions include:

  • Before publication: manuscript rejection
  • After publication: correction, expression of concern, or retraction, following COPE guidelines
  • Notification of authors’ institutions or funders, where appropriate
  • Temporary or permanent restriction of future submissions

Policy Review and Updates

Given the rapid evolution of AI technologies, Lumina Quest Publishing will periodically review and update this policy to reflect:

  • Technological developments
  • Emerging best practices
  • Updated guidance from COPE, ICMJE, and the scholarly community

Revisions will be publicly communicated on the publisher’s website. Stakeholders are encouraged to consult the most recent version of this policy.