Research Ethics and Academic Integrity in the Digital Era: A Sociological Framework for Trust, Accountability, and Global Equity

Author: L. Mercer — Affiliation: Independent Researcher
Abstract
Research ethics and academic integrity are essential to the validity of scholarship, yet the digital age has transformed how knowledge is produced, validated, and recognized. Data-intensive research, open science infrastructures, platform-based publishing, metric-driven evaluation, and generative artificial intelligence (GenAI) have expanded scholarly capacity while introducing new ethical risks: privacy violations, ambiguous consent, algorithmic bias, authorship conflicts, paper mills, synthetic data, image manipulation, undisclosed automation, and inconsistent governance across institutions and regions. This article develops a sociologically informed analysis of these transformations through three interrelated frameworks: Bourdieu’s theory of fields and capitals, world-systems theory, and institutional isomorphism. Together, these frameworks show that integrity problems are not merely personal failings but foreseeable consequences of competitive pressure, unequal resource distribution, and legitimacy-driven policy convergence. Methodologically, the paper employs an integrative conceptual review, synthesizing recent peer-reviewed scholarship (with a focus on 2021–2025) on GenAI, integrity, data ethics, reproducibility, and misconduct detection. The analysis identifies five integrity "pressure points" (data governance, authorship/accountability, evaluation/metrics, manipulation and fabrication, and automation/GenAI) and proposes an Integrity-by-Design model that embeds ethical safeguards into workflows, infrastructures, and incentive systems. The findings emphasize that (1) digital ethics must shift from static compliance to ongoing risk management, (2) transparency about AI assistance and research processes is becoming a central norm, (3) integrity capacity is unevenly distributed across the global knowledge system, and (4) institutions often adopt policies that signal legitimacy but fail without adequate training, resources, and fair procedures. The article concludes with concrete recommendations for researchers, supervisors, institutions, and journals on building trust without stifling innovation.
Introduction
Research derives its credibility from a social contract: scholars commit to producing knowledge through honest methods, treating participants and communities with respect, reporting openly what was done, and taking responsibility for errors. Academic integrity upholds this contract by establishing standards for authorship, citation, data management, peer review, and the ethical dissemination of results. When integrity fails, the damage extends beyond a single paper or course: it erodes public trust, distorts policy decisions, harms health outcomes, and undermines the legitimacy of education and the reputation of entire fields.
In the digital age, research is faster, more connected, and more visible than ever. The same dataset can be reused by multiple teams; a preprint can circulate worldwide within hours; a paper can be assembled across countries, with one collaborator collecting data, another analyzing it, and a third drafting text with digital tools, before being submitted to platform-optimized outlets. These changes bring genuine benefits: broader access to learning, easier collaboration, and more powerful analysis. But they also create ethical vulnerabilities that traditional integrity frameworks were not designed to address.
Four digital shifts are especially consequential:
Scale and traceability: Research increasingly relies on large, reusable datasets and digital traces. Ethical risks emerge around consent, privacy, ownership, and re-identification.
Automation and assistance: Tools assist with writing, translation, coding, and analysis. GenAI expands this assistance dramatically, intensifying questions about authorship, responsibility, originality, and verification.
Platform governance and metrics: Scholarly publishing and evaluation operate through platforms and metric dashboards that shape what is rewarded, visible, and funded.
Global participation under inequality: More regions and institutions participate in global scholarship, but resources and governance infrastructures remain uneven. Standards that appear universal can impose unequal burdens.
This article argues that research ethics and integrity must be understood not only as a matter of individual virtue, but also as a field-level problem shaped by incentives, capital distributions, and institutional routines. A sociological lens helps explain why integrity breaches recur despite policies, why certain forms of misconduct become fashionable at specific times, and why some institutions struggle to implement reforms effectively.
To build that explanation, the paper uses three theoretical frameworks:
Bourdieu’s field theory to analyze competition, capital, and legitimacy in academia.
World-systems theory to examine global inequalities in knowledge production and integrity capacity.
Institutional isomorphism to explain why integrity policies converge and why convergence can become performative.
The objective is practical as well as analytical: to provide an academically rigorous, publication-ready article that also offers usable guidance for contemporary research governance.
Background and Theoretical Framework
1) Bourdieu: Academic Field, Habitus, and Capitals
Bourdieu describes social life as organized into fields—structured arenas where actors compete for resources and legitimacy (Bourdieu, 1984). The academic field is a space of competition in which prestige, funding, publication opportunities, and career security are unevenly distributed. Within this field, actors use various forms of capital:
Cultural capital: methodological competence, disciplinary knowledge, credentials, writing ability, and “research taste.”
Social capital: networks, mentorship, collaborations, editorial relationships, and gatekeeper access.
Symbolic capital: reputation, citations, awards, high-status affiliations, and recognition by elite outlets.
Economic capital: funding, lab infrastructure, software access, time, support staff, and stable contracts.
Digitalization reorganizes how these capitals operate and convert into one another. For example, economic capital can be converted into symbolic capital by paying for high-cost tools (software, analytics, AI subscriptions), data access, or professional editing. Conversely, symbolic capital can convert into economic capital through grant success and paid collaborations.
Bourdieu’s concept of habitus—internalized dispositions—helps explain why integrity training matters. Integrity is not only rule knowledge; it is embodied practice: citing reflexively, maintaining audit trails, conducting ethical reflection, and resisting shortcuts. When technology changes rapidly, habitus may lag behind field demands. Researchers may inadvertently violate best practices simply because norms have shifted faster than training and mentorship.
A Bourdieu-informed view also emphasizes that many integrity failures are linked to structural pressure: when researchers face precarious employment, high publication demands, and scarce funding, the temptation to cut corners increases. Integrity becomes harder when survival depends on output.
2) World-Systems Theory: Core–Periphery Dynamics in Knowledge Production
World-systems theory emphasizes global inequality structured through core, semi-periphery, and periphery relations (Wallerstein, 2004). Applied to academia, the “core” often controls dominant publication languages, high-prestige journals, funding streams, and standard-setting bodies. Many peripheral institutions conduct important research but face barriers: limited infrastructure, lower visibility, language constraints, and higher costs of access to databases and publication routes.
Digital tools can reduce some barriers (remote collaboration, open repositories), but they can also introduce new ones (APCs, proprietary analytics, subscription paywalls, and AI tool costs). Core-centered standards may travel as “global best practice,” yet compliance may require resources not universally available. The result can be paradoxical: the institutions most pressured to demonstrate integrity may have the least capacity to do so.
Integrity capacity—ethics committees, data protection infrastructure, misconduct investigation processes, training, and screening tools—becomes a form of capital unevenly distributed across the world-system. This inequality can shape both actual misconduct risk and perceived trustworthiness. Regions with fewer resources may be unfairly stigmatized, and researchers may experience disproportionate scrutiny.
3) Institutional Isomorphism: Why Integrity Policies Spread—And Why They Sometimes Become “Integrity Theater”
DiMaggio and Powell (1983) explain that organizations become similar through institutional isomorphism:
Coercive isomorphism: mandates from governments, funders, regulators, and accreditors.
Mimetic isomorphism: imitation under uncertainty—copying what appears successful.
Normative isomorphism: professionalization—shared training and networks producing similar norms.
Research ethics and integrity policies often diffuse through all three mechanisms. Universities adopt plagiarism rules, AI policies, research data management templates, and ethics approval procedures partly because peers do so and because these policies signal legitimacy to stakeholders. This can be beneficial: shared standards increase portability and clarity.
But isomorphism can also produce symbolic compliance—policies that exist for external legitimacy but do not translate into practice due to lack of training, staffing, enforcement, or cultural buy-in. In such cases, organizations perform governance without building capacity. This becomes especially likely in fast-changing digital environments where uncertainty is high and copying seems safer than experimentation.
Method
Research Design
This article uses an integrative conceptual review and theory-driven synthesis. Rather than reporting a single dataset, it compiles and interprets current research on digital-era integrity challenges and governance responses, grounding the analysis in sociological theory.
Evidence Base
The synthesis prioritizes peer-reviewed literature from 2021–2025 on GenAI and integrity, data governance, and misconduct detection, supplemented by classic theoretical sources. The intent is to ensure the discussion reflects contemporary realities while keeping a robust conceptual foundation.
Analytical Steps
Identify major digital-era integrity challenges and classify them into “pressure points.”
Explain why these challenges emerge and persist using the three theoretical lenses.
Derive a practical governance model (Integrity-by-Design) and associated recommendations.
Analysis: Five Integrity Pressure Points in the Digital Era
Pressure Point 1: Data Ethics Under Conditions of Abundance, Reuse, and Surveillance
Digital research increasingly depends on data that were not originally collected for research purposes: platform behavior traces, administrative records, learning analytics, mobile sensors, and large-scale archives. This shift raises ethical dilemmas that classical “human subjects” frameworks do not fully resolve.
Key ethical problems include:
Consent ambiguity: People may not understand that their digital traces can become research data. Even when consent exists, secondary use may exceed original expectations.
Re-identification risk: “Anonymous” data can be re-identified through linkage with other datasets. Risk rises as datasets grow.
Group harms: Research may harm communities even when individuals are not identifiable (stigmatizing narratives, predictive profiling, biased interventions).
Ownership and governance: Who owns platform data? Who can grant permission? Who is accountable for misuse?
Data security and stewardship: Ethical data practice requires secure storage, controlled access, audit logs, and documentation—capacity that varies widely.
In open science environments, these tensions intensify. Openness supports reproducibility and accountability, yet not all data can be fully open without risk. Recent scholarship highlights the need for balanced models: open methods and transparent analysis alongside controlled access for sensitive data (Lvovs et al., 2025).
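To make the re-identification risk above concrete, the following sketch estimates k-anonymity: the size of the smallest group of records that share the same quasi-identifiers (for example, age band, postcode, and gender). It is a minimal Python illustration under assumed field names and an illustrative threshold, not a complete privacy assessment.

from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size among records sharing the same
    combination of quasi-identifier values. A low k means individuals
    are easier to re-identify by linkage with other datasets."""
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return min(groups.values())

# Hypothetical "anonymous" survey records.
records = [
    {"age_band": "30-39", "postcode": "8001", "gender": "F", "response": "A"},
    {"age_band": "30-39", "postcode": "8001", "gender": "F", "response": "B"},
    {"age_band": "40-49", "postcode": "3012", "gender": "M", "response": "A"},
]

k = k_anonymity(records, ["age_band", "postcode", "gender"])
if k < 5:  # threshold chosen for illustration only
    print(f"k = {k}: high re-identification risk; coarsen categories or use controlled access")

In this example k = 1, because the third record's combination of quasi-identifiers is unique; releasing such data "anonymously" would still expose that individual to linkage attacks.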
Bourdieu’s explanation:
Data stewardship becomes symbolic capital—institutions signal modernity through “open data” commitments. Yet true openness demands economic and cultural capital: secure repositories, trained staff, and expertise in privacy-preserving techniques. Those without resources may mimic openness superficially, inadvertently increasing participant risk.
World-systems explanation:
Core institutions often define openness norms. Peripheral institutions may face pressure to comply with standards that assume stable legal frameworks and infrastructure. Without capacity-building, compliance becomes unequal: some can share responsibly; others are forced into risk or exclusion.
Isomorphism explanation:
Data management plans and ethics templates spread rapidly. But where implementation is weak, governance becomes performative: boxes are checked, while real risk management remains minimal.
Integrity implication:
Digital-era research ethics must treat data governance as ongoing stewardship, not a single approval event.
Pressure Point 2: Authorship, Contribution, and Accountability in Distributed and AI-Assisted Production
Authorship historically performed two functions: allocating credit and allocating responsibility. Digital workflows disrupt both.
New realities include:
Distributed labor: Research can be assembled across locations and roles. Contribution becomes modular—data, code, writing, visualization, and editing may be separated.
Ghost and guest authorship: Digital outsourcing can hide unacknowledged contributors; status hierarchies can encourage adding prestigious names without real work.
Paper mills and contract cheating ecosystems: Digital markets can supply fabricated manuscripts or data, challenging the authenticity of outputs.
GenAI assistance: GenAI can draft text, summarize literature, propose code, and translate writing. Used ethically, it can reduce barriers; used invisibly, it can blur accountability.
Recent work emphasizes that the core integrity response to GenAI is not necessarily prohibition, but transparency and responsibility: researchers must disclose appropriate tool use and verify accuracy rather than outsourcing judgment to automation (Yusuf, Pervin, & Román-González, 2024; Bittle & El-Gayar, 2025).
Bourdieu’s explanation:
Authorship is symbolic capital. If publication quantity is rewarded, tools that accelerate writing become competitive advantages. This creates a temptation structure: small undisclosed assistance can escalate into hidden outsourcing. Tool literacy and access become new forms of cultural and economic capital.
World-systems explanation:
Language dominance matters. Researchers outside dominant-language settings face pressure to publish in prestigious outlets. Ethical AI use could reduce language inequality, yet it may also trigger suspicion: fluent writing can be misread as dishonesty. This can result in unequal scrutiny and symbolic injustice.
Isomorphism explanation:
Many organizations copy policy statements about authorship and AI, but enforcement is often unclear. Without operational definitions (what counts as assistance, what must be disclosed, what is prohibited), rules remain symbolic.
Integrity implication:
Institutions should move from vague authorship norms to concrete contribution transparency (roles, workflows, and accountability) and clear AI disclosure expectations proportionate to risk.
Pressure Point 3: Evaluation, Metrics, and the Acceleration of “Output Over Quality”
Digital platforms have made scholarly performance highly measurable. Citation dashboards, download counts, and publication metrics are now embedded in hiring, promotion, funding, and institutional branding. While metrics can provide signals, they also reshape behavior.
Integrity risks intensified by metric-driven systems include:
Salami slicing: splitting results into multiple small papers to increase counts.
Citation gaming: mutual citation rings, coercive citation, and strategic referencing.
Neglect of replication: careful verification is undervalued relative to novelty and speed.
Overclaiming and hype: findings are overstated to attract attention and citations.
Shortcut incentives: time pressures encourage reduced documentation, weak peer review engagement, and sometimes misconduct.
Bourdieu’s explanation:
Metrics convert symbolic capital into numbers, intensifying competition. When the field rewards measurable outputs, the rational strategy becomes maximizing metric performance. Integrity becomes fragile when survival depends on speed and visibility.
World-systems explanation:
Global metrics often privilege core publication venues and dominant languages. Peripheral scholars may face stronger pressure to publish in core outlets while lacking support. This can create a double bind: meet core standards without core resources.
Isomorphism explanation:
Institutions imitate metric-centric evaluation because it appears objective and modern. Yet this imitation can produce perverse incentives. Integrity reform requires evaluation systems that reward transparency, reproducibility, and responsible data stewardship—not only output quantity.
Integrity implication:
Digital integrity is inseparable from evaluation reform. If incentives reward speed and volume, integrity policies alone will not work.
Pressure Point 4: Manipulation, Fabrication, and the Expanding Frontier of Digital Forgery
Integrity threats now extend far beyond text plagiarism. Digital manipulation can involve images, datasets, code, and even the fabrication of entire research narratives.
Contemporary concerns include:
Image manipulation: altering figures, microscopy images, gels, plots, or visual evidence.
Synthetic or fabricated data: generating plausible datasets that evade superficial review.
Automated paper production: assembling manuscripts through templates and AI tools.
Detection arms race: as tools for fabrication improve, detection becomes harder.
Recent technical reviews discuss deep-learning-based methods for detecting digital image manipulation and highlight the complexity of distinguishing benign image processing from deceptive alteration (Duszejko et al., 2025). At the same time, ethical discussions warn against overreliance on automated screening without due process, given the risk of false positives and reputational harm (Hosseini et al., 2025).
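As a concrete illustration of how simple passive screening works, the sketch below applies error level analysis (ELA): a JPEG is recompressed and the recompression error is inspected, since edited regions sometimes stand out. It assumes the Pillow library and an illustrative file path, is far cruder than the deep-learning methods reviewed by Duszejko et al. (2025), and, in line with Hosseini et al. (2025), should only ever route images to human review rather than trigger accusations.

import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Recompress a JPEG and return per-channel (min, max) differences.
    Regions with unusually high recompression error may have been edited,
    but benign image processing can produce similar signals."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    return diff.getextrema()

# Hypothetical usage: flag a figure for human review, never for automatic accusation.
extrema = error_level_analysis("figure_2_gel.jpg")  # illustrative file name
if max(channel_max for _, channel_max in extrema) > 40:  # illustrative threshold
    print("Unusual recompression error: forward for human review with due process")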
Bourdieu’s explanation:
Fraud can be understood as an extreme response to competitive pressure when symbolic rewards are highly concentrated. When the perceived payoff of “success” is high and the risk of detection seems low, fabrication becomes a calculated strategy for some actors.
World-systems explanation:
Detection and enforcement capacity is uneven. Wealthy institutions and major publishers can afford screening tools and integrity offices; others cannot. This asymmetry can both increase vulnerability and intensify stigma against less-resourced regions.
Isomorphism explanation:
Organizations adopt screening technologies because peers do so. But if screening becomes punitive without transparent standards and appeal mechanisms, it can erode trust. Ethical integrity systems need procedural fairness.
Integrity implication:
Digital-era integrity must include evidence governance: audit trails, raw data retention where appropriate, transparent image processing standards, and fair investigative processes.
Pressure Point 5: GenAI, Epistemic Risk, and the Problem of “Plausible but False” Scholarship
GenAI has become the defining integrity challenge of the moment because it can generate fluent, convincing text rapidly. The central risk is not just plagiarism; it is epistemic uncertainty—the spread of plausible errors.
Major GenAI-related integrity risks include:
Hallucinated facts and citations: AI-generated text may confidently present false information or nonexistent sources.
Undisclosed automation: hidden AI assistance can blur accountability and misrepresent effort.
Style substitution: writing becomes detached from understanding, weakening scholarly responsibility.
Data leakage: entering sensitive data into third-party tools can violate confidentiality obligations.
Assessment disruption: in education, GenAI challenges how authenticity of student work is evaluated and how learning is measured.
Recent research argues for rethinking integrity definitions to address GenAI’s impact on authenticity and accountability (Laflamme, 2025). Systematic review work suggests institutions are experimenting with policy approaches, emphasizing disclosure, assessment redesign, and AI literacy rather than simple bans (Bittle & El-Gayar, 2025).
Bourdieu’s explanation:
GenAI becomes a “capital multiplier.” Those with access and skill can produce more polished outputs faster. Cultural capital now includes AI literacy: verification skills, prompt discipline, and responsible delegation.
World-systems explanation:
GenAI can reduce language barriers and expand participation, but costs and access inequalities can reinforce stratification. Training data biases can also reproduce core-centric knowledge norms.
Isomorphism explanation:
Under uncertainty, institutions copy AI policies. Some overreact with bans; others underreact with permissive ambiguity. Copying without implementation details yields inconsistent practice and confusion.
Integrity implication:
GenAI governance must prioritize verification, documentation, proportional disclosure, and human accountability.
Findings: What a Sociological Lens Reveals
Finding 1: Integrity Failures Are Often Rational Responses to Field Pressures
From a moral standpoint, misconduct is wrong. From a sociological standpoint, it often emerges where incentives reward outcomes and punish delay. When employment is precarious and performance is measured by outputs, shortcuts become attractive. This does not excuse misconduct, but it explains why ethics training alone is insufficient.
Institutional lesson: Reduce high-risk incentive structures and strengthen supportive integrity infrastructures.
Finding 2: Transparency Is Becoming the Central Integrity Norm
Across data stewardship, authorship, peer review, and AI assistance, transparency is the shared solution. The digital era increases complexity, and complexity requires documentation.
Institutional lesson: Normalize transparency as a professional standard, not as an admission of wrongdoing.
Finding 3: Integrity Capacity Is Unevenly Distributed Globally
World-systems dynamics reveal that ethical compliance requires resources: secure systems, trained staff, legal guidance, and time. Without capacity-building, standards can become exclusionary.
Institutional lesson: Ethical governance must include equity—shared infrastructures, training, and realistic compliance expectations.
Finding 4: Policy Convergence Can Create Legitimacy Without Effectiveness
Isomorphism explains why organizations adopt policies quickly. Yet policies alone do not change practice. Without training, enforcement, and procedural fairness, governance becomes performative.
Institutional lesson: Measure integrity by practice indicators and outcomes, not by policy presence.
Finding 5: GenAI Requires “Integrity Literacy,” Not Just Detection
Detection tools are imperfect and can harm trust if misused. The more sustainable response is literacy: verification routines, documentation, and ethical tool-use habits.
Institutional lesson: Build AI integrity competence through training and workflow design.
Toward an Integrity-by-Design Model
Concept
Integrity-by-Design means embedding ethical safeguards into the research lifecycle so that responsible conduct is the default and misconduct is harder to perform undetected. It treats integrity as a system property, not only an individual virtue.
Pillar A: Workflow Accountability (From “Who Authored?” to “How Was This Built?”)
Practices:
Contribution mapping using role categories (data curation, analysis, writing, supervision, project administration).
Documented decision logs for major methodological choices.
Version control for code and manuscripts where feasible.
Internal pre-submission checks for consistency between claims, data, and analysis.
Why it works:
It increases auditability and clarifies responsibility, reducing ambiguity that enables misconduct or disputes.
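A minimal sketch of what workflow accountability can look like in code, assuming a CRediT-style role vocabulary and a JSON Lines log file; the names and file paths are illustrative, not a prescribed standard.

import json
from datetime import date

# Contribution map recorded at project start and updated as roles change.
contributions = {
    "A. Researcher": ["conceptualization", "data curation", "formal analysis"],
    "B. Colleague": ["methodology", "writing (original draft)"],
    "C. Supervisor": ["supervision", "writing (review and editing)"],
}

def log_decision(description, rationale, logfile="decision_log.jsonl"):
    """Append a dated record of a major methodological choice, keeping an
    auditable trail of how the work was built."""
    entry = {"date": date.today().isoformat(),
             "decision": description,
             "rationale": rationale}
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("Excluded responses completed in under 60 seconds",
             "Pre-registered data-quality criterion; see analysis plan v1.2")

Kept under version control alongside code and manuscripts, such records make the pre-submission consistency checks listed above far easier to perform.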
Pillar B: Responsible Data Stewardship (Ethics as Infrastructure)
Practices:
Data classification (public, restricted, sensitive) with clear handling rules.
Consent language that anticipates reuse when appropriate.
Secure storage, access controls, and retention policies.
Controlled access sharing models for sensitive datasets.
Documentation standards: metadata, codebooks, and provenance logs.
Why it works:
It prevents harm to participants and communities while supporting reproducibility and responsible openness.
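One way to make data classification operational is to encode the handling rules so that they can be checked before a dataset is stored or shared. The tiers and rules below are illustrative assumptions; real rules must follow the applicable legal and institutional requirements.

# Illustrative handling rules per classification tier.
HANDLING_RULES = {
    "public": {"encryption_required": False, "open_sharing": True, "access_log": False},
    "restricted": {"encryption_required": True, "open_sharing": False, "access_log": True},
    "sensitive": {"encryption_required": True, "open_sharing": False, "access_log": True,
                  "controlled_access_only": True, "retention_years": 10},
}

def may_share_openly(classification):
    """Return True only if the tier permits deposit in an open repository;
    sensitive data should go through controlled-access models instead."""
    return HANDLING_RULES[classification].get("open_sharing", False)

print(may_share_openly("sensitive"))  # False: use a controlled-access repository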
Pillar C: AI and Automation Governance (Proportionate Rules, Human Accountability)
Principles:
Human responsibility is non-transferable: authors remain accountable for claims and citations.
Disclosure is proportionate: disclose AI use meaningfully (e.g., drafting, translation, code assistance) without forcing excessive ritual detail.
Verification is mandatory: AI outputs require checking against sources, datasets, and methods.
Confidentiality is protected: avoid placing sensitive data into insecure tools.
Operational practices:
AI use statements for manuscripts when AI contributed to drafting or analysis support.
Checklists for citation verification and factual validation.
Training in prompt discipline and error awareness.
Why it works:
It reframes AI from a cheating threat to a governance challenge solved through transparency and verification.
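A small sketch of the kind of verification routine meant here: each DOI cited in a manuscript is checked against the public Crossref REST API before submission, which helps catch hallucinated or mistyped references. It assumes the requests library, network access, and placeholder DOIs; a DOI that fails the check still needs human review, since not every legitimate source is indexed by Crossref.

import requests

def doi_resolves(doi, timeout=10):
    """Return True if Crossref recognizes this DOI. A False result flags the
    citation for manual verification; it does not prove the source is fake."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return response.status_code == 200

# Placeholder DOIs standing in for a manuscript's reference list.
cited_dois = ["10.1234/example.2024.001", "10.5678/possibly.fabricated.ref"]
for doi in cited_dois:
    status = "found" if doi_resolves(doi) else "NOT FOUND: verify manually"
    print(f"{doi}: {status}")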
Pillar D: Ethical Evaluation and Incentives (Reward What Integrity Requires)
Practices:
Recognize open methods, reproducible code, and robust data documentation in promotion and funding decisions.
Reduce overreliance on single metrics.
Value replication, null results, and careful scholarship.
Encourage responsible collaboration and mentorship.
Why it works:
It aligns incentives with integrity, reducing pressure-driven misconduct.
Pillar E: Fair Procedures and Learning Systems (Integrity With Due Process)
Practices:
Clear misconduct definitions and investigative thresholds.
Human oversight of screening tools; no automatic accusations.
Right to respond and transparent appeal processes.
Institutional learning: anonymized case lessons, updated training, and revised procedures.
Why it works:
It protects individuals from unjust harm while strengthening collective trust and competence.
Practical Recommendations (Ready-to-Use)
For Researchers
Treat documentation as a core research method: maintain versioned notes, data provenance, and analysis logs.
Use AI tools ethically: verify claims, avoid fabricated citations, and disclose significant assistance.
Protect sensitive data: do not upload confidential information to uncontrolled systems.
Clarify authorship early: document roles and expectations at project start.
Resist metric temptation: prioritize accurate reporting and reproducibility over speed.
For Supervisors and Mentors
Teach integrity as practice: model documentation habits and verification routines.
Discuss AI openly: define acceptable tool use and disclosure norms within the research group.
Provide feedback on research process, not only results.
Support early-career researchers facing pressure: reduce hidden incentive structures that encourage shortcuts.
For Institutions
Resource integrity: invest in training, data stewardship support, and fair investigative capacity.
Build AI integrity literacy programs for staff and students.
Reform evaluation: reward transparency, quality, and reproducibility.
Avoid integrity theater: ensure policies have implementation plans, staff ownership, and measurable outcomes.
Promote equity: provide shared infrastructure and support for teams with fewer resources.
For Journals and Publishers
Require meaningful transparency: contributor statements, data availability explanations, and AI use disclosure where relevant.
Use screening tools responsibly with human oversight and due process.
Encourage reproducibility: methods clarity, code sharing where possible, and robust reporting standards.
Avoid metric-driven editorial bias that over-rewards novelty at the expense of rigor.
Conclusion
Rules alone will not protect research ethics and academic integrity in the digital age. The digital research environment is shaped by platform incentives, automation, rapid sharing, and global inequality. A sociological perspective clarifies the underlying mechanisms: Bourdieu frames integrity pressures as outcomes of competition for capital and legitimacy; world-systems theory exposes disparities in integrity capacity and recognition; and institutional isomorphism explains why policies converge rapidly yet sometimes remain performative.
The digital age requires a shift from compliance checklists to Integrity-by-Design: embedding accountability, data stewardship, AI governance, incentive alignment, and fair procedures into everyday research practice. The goal is not to halt innovation but to ensure that it produces reliable knowledge. Integrity is the foundation of credibility: without it, research is merely noise; with it, research remains a public good.
Hashtags
#ResearchEthics #AcademicIntegrity #DigitalEra #ResponsibleAI #DataStewardship #OpenScience #TrustInScholarship
References
Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), 296.
Bourdieu, P. (1977). Outline of a Theory of Practice. Cambridge University Press.
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. Harvard University Press.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.
Duszejko, P., et al. (2025). Detection of manipulations in digital images: A review of passive and active methods utilizing deep learning. Applied Sciences, 15(2), 881.
Hosseini, M., et al. (2025). Guidance needed for using artificial intelligence to screen for research misconduct, including plagiarism and data or image manipulation. Journal of Medical Ethics (advance online publication).
Laflamme, A. S. (2025). Redefining academic integrity in the age of generative artificial intelligence. Journal of Scholarship of Teaching and Learning (advance online publication).
Lvovs, D., et al. (2025). Balancing ethical data sharing and open science for responsible biomedical data science. Cell Reports Methods, 5, Article identifier as published.
Resnik, D. B. (2020). The Ethics of Science: An Introduction. Routledge.
Wallerstein, I. (2004). World-Systems Analysis: An Introduction. Duke University Press.
Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: A threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21, 21.
Zwart, H. (2022). Digitalization, integrity, and responsible research: Rethinking governance in the era of data-intensive science. Science and Engineering Ethics.
