Artificial Intelligence Use Policy

POLICY ON THE USE OF AI AND AI-SUPPORTED TECHNOLOGIES

1. GENERAL PROVISIONS

1.1. The Editorial Office recognizes the potential of artificial intelligence (AI) as an auxiliary tool, but emphasizes the "Human-in-the-loop" principle. This means that final responsibility for the interpretation of social contexts, ethical assessment, and scientific novelty rests exclusively with the human author.

1.2. This Policy applies to all stages of manuscript preparation: from data collection and analysis to text editing and the creation of visualizations. The use of AI is permissible only where it does not undermine the originality and academic integrity of the work.

1.3. This Policy is based on the principles and recommendations of the following international organizations: the Committee on Publication Ethics (COPE), the European Association of Science Editors (EASE), and the World Association of Medical Editors (WAME).

1.4. The Editorial Office recognizes that artificial intelligence technologies are developing rapidly, creating new ethical and technical challenges for scholarly communication. Accordingly, the Editorial Office reserves the right to make changes and additions to this Policy without prior notice to authors in order to ensure its compliance with current international standards (COPE, EASE, WAME) and the requirements of academic integrity. Authors are advised to check the current version of the Policy on the journal website immediately before submitting a manuscript.

 
2. AUTHOR STATUS AND RESPONSIBILITY

2.1. Generative AI systems, including large language models (LLMs), do not meet the criteria for authorship. Authorship presupposes the ability to interpret results, approve the final version of the manuscript, and bear legal responsibility for its content. Since AI does not have legal personality, it cannot be listed as an author or co-author of a scientific publication.

2.2. The human author bears personal responsibility for:

  • Data reliability: Every fact, figure, or statement generated or processed with the assistance of AI must be verified by the author against primary sources.
  • Absence of plagiarism: The use of AI-generated text without proper disclosure, or the use of AI to paraphrase the works of others without citation, is regarded as plagiarism.
  • Absence of "hallucinations": References to non-existent sources or errors produced by AI algorithms do not exempt the author from responsibility for disseminating unreliable information.
  • Prevention of bias: Researchers must account for algorithmic bias against particular social groups, including gender-based bias.

2.3. Authors are required to critically evaluate any results obtained from AI. Authors must guarantee that:

  • The theoretical conclusions and conceptual frameworks of the study are the result of their own intellectual inquiry, rather than a mere algorithmic compilation.
  • The ethical aspects of the research (especially when working with human participants) have been analyzed personally by the author.

2.4. Authors must not upload confidential data into generative AI systems (for example, unpublished interviews or respondents' personal data), as this may result in privacy violations and information leakage into model training databases.

 
3. PERMITTED AND PROHIBITED USE OF AI

3.1. AI may be used exclusively as a technical support tool at stages that do not involve the creation of new scientific knowledge. Such areas include:

  • Language and stylistic editing: improving sentence structure, correcting grammatical and spelling errors, and translating text (provided that the author verifies the accuracy of the terminology).
  • Technical data processing: coding open data, formatting reference lists in accordance with standards, and assisting in writing software code for statistical analysis.
  • Search query support: using AI to generate keywords or structure search queries in scientometric databases.
  • Brainstorming: assisting in structuring ideas at the initial stage of concept development (without incorporating generated ideas directly into the article text as research results).

3.2. Any actions that delegate the creation of an intellectual product to AI are considered a violation of scientific ethics. These include:

  • Generation of scientific hypotheses and conclusions: formulating key theoretical propositions or generalizations with the assistance of AI.
  • Writing substantive parts of the text: generating the introduction, literature review, analysis of results, or discussion.
  • Falsification of the empirical basis: using AI to create fictitious survey results, interviews, transcripts, or statistical data.
  • Automatic paraphrasing: using AI to rewrite the works of others in order to circumvent plagiarism-detection systems.
  • Source verification without cross-checking: relying on AI to confirm facts without checking them against original archival or published sources (due to the high risk of fabricated, non-existent references).

The use of AI outputs as primary sources to support scientific claims is also prohibited. AI may assist in processing, but it cannot serve as an "authoritative voice" in scholarly discussion.

3.3. Restrictions on content generation do not apply in cases where the use of AI itself, its algorithms, or the results of its activity constitute the direct object of research (for example, discourse analysis of neural networks in the media). In such cases, all generated materials must be presented as quotations or appendices with clear labeling.

 
4. VISUALIZATION AND GRAPHIC DATA

4.1. It is prohibited to use AI to generate graphs, charts, or maps based on fictitious or unverified statistical datasets. Any visualization must include a clear reference to the primary source of the empirical data. The use of AI for artificial "cleaning" of data or concealment of statistical anomalies is regarded as scientific fraud.

4.2. Any image (infographic, conceptual model, reconstruction) created or substantially edited with the assistance of AI (e.g., Midjourney, DALL-E, Canva AI) must include a caption specifying:

  • the name and version of the tool;
  • the date of image generation (since algorithms change dynamically);
  • authorship of the input parameters.

Example: "Fig. 1. Model of social interaction. Generated using Midjourney v.6.1 (date of access: 05.03.2026) based on the author's input parameters."

4.3. It is strictly prohibited to use AI to "enhance," restore, or alter archival documents, photographs, archaeological artifacts, or other primary sources that serve as evidentiary material. Any AI processing of such objects (for example, colorization or sharpness enhancement) must be clearly declared as a reconstruction rather than an original source.

4.4. Authors must guarantee that the use of AI to create illustrations does not infringe the copyright of third parties (on whose works the model was trained) and complies with the licensing policy of the selected AI service. The Editorial Office bears no responsibility for intellectual property claims relating to generated images.

4.5. Authors are required to retain the original datasets and text prompts used for generation. The Editorial Office or reviewers have the right to request these materials in order to verify the scientific validity of the visualization.

 
5. DISCLOSURE REQUIREMENTS

5.1. Authors are required to openly disclose the use of AI tools at any stage of manuscript preparation. Undisclosed use of AI is regarded as a violation of academic integrity.

5.2. Depending on the purpose of use, information about AI must be presented in the following sections:

  • In the Introduction or Methodology: if the use of AI formed part of the research design or the method of data collection/analysis. If the article involves technical analysis, AI-generated code may be used to create simulated (synthetic) data for hypothesis testing; such use requires separate disclosure in the "Methodology" section.
  • In a separate "AI Declaration": placed at the end of the article (before the reference list) if AI was used for editing, translation, or technical support.

5.3. Example declaration:

"During the preparation of this manuscript, the authors used the [NAME OF TOOL/SERVICE, VERSION] for the purpose of [INDICATE THE PURPOSE: for example, stylistic editing of the English-language text / code generation for data analysis]. After using this tool, the authors carefully reviewed and edited the content and assume full and sole responsibility for the final content of the publication."

5.4. The Editorial Office reserves the right to request from the authors (in the event that reviewers raise doubts regarding the proper use of AI in the text) a list of prompts or copies of dialogues with AI in order to verify the logic of the research and prevent falsification.

5.5. The use of standard tools with integrated AI functions that do not generate substantive content does not require specific disclosure (for example: Microsoft Word spell-checking tools, grammar-checking services such as Grammarly in "correctness only" mode, citation managers, and reference list formatting managers).

 
6. POLICY FOR REVIEWERS AND EDITORS

6.1. Reviewers and members of the Editorial Board are strictly prohibited from uploading manuscripts (or fragments thereof) into generative artificial intelligence systems for analysis, review writing, or text checking. This is regarded as a direct violation of confidentiality and copyright, since most AI platforms retain data for further model training.

6.2. Peer review is a process of expert evaluation based on the experience and critical thinking of a specialist. The use of AI to generate the text of a review is unacceptable, as algorithms are incapable of adequately assessing scientific novelty and theoretical depth in the social sciences.

6.3. Screening for AI use.

  • Editors have the right to use specialized software (AI detectors) for the preliminary screening of manuscripts for signs of generated text.
  • The results of such detectors cannot serve as the sole basis for rejection of an article, but they do constitute a signal for additional examination of academic integrity.

6.4. Members of the Editorial Board may use AI only for technical tasks (for example, checking reference-list formatting for compliance with the journal's standards, or translating editorial correspondence), provided that the transfer of authors' personal data or the original ideas of the manuscript is fully excluded.

6.5. Any identified use of AI by a reviewer in the evaluation of an article results in that person's removal from the journal's reviewer database and the invalidation of the review results.

 
7. CONSEQUENCES OF VIOLATIONS AND APPEAL PROCEDURE

7.1. Undisclosed use of AI (undeclared generation of text, ideas, or data falsification) is regarded as a form of academic misconduct equivalent to plagiarism or fabrication of results.

7.2. If signs of unauthorized AI use are identified during the initial screening or peer review:

  • The Editorial Office has the right to require the authors to provide explanations and the prompts used during the work.
  • If the fact of the violation is confirmed, the manuscript is rejected without the right to resubmission.
  • The Editorial Office may send an official notice of the ethical violation to the institution where the author is employed.

7.3. If undisclosed AI use or AI-generated falsifications (for example, non-existent sources) are identified in an already published article, the Editorial Office initiates the retraction procedure in accordance with COPE protocols. For its part, the journal discloses on its website which specific AI tools (for example, plagiarism detectors with AI functions) it uses to screen manuscripts.

7.4. Authors have the right to appeal the Editorial Office's decision if they believe that accusations of AI use are unfounded:

  • The appeal must be submitted in writing within 14 days of receipt of the Editorial Office's decision.
  • Authors must provide evidence of the authenticity of the text (for example, file revision history in Word, drafts, interview records, references to real archival sources).
  • To consider disputed cases, the Editorial Office may engage an independent expert in digital ethics or the relevant academic integrity commission of an educational institution.

7.5. If an author suspects that their work was reviewed by AI, they have the right to contact the Editor and request that the review be checked using AI detectors.

7.6. The decision of the Editorial Board following consideration of the appeal is final and not subject to further review.