Heritage, Journal year: 2024, Issue 7(10), pp. 5428–5445
Published: Sep. 30, 2024

The introduction of generative AI has the potential to radically transform various fields of research, including archaeology. This study explores the use of generative AI, specifically ChatGPT, in developing a computer application for analyzing aerial and satellite images to detect archaeological anomalies. The main focus was not on the application itself but on evaluating ChatGPT's effectiveness as an IT assistant for humanistic researchers. Starting with a simple prompt to analyze a multispectral orthophoto, the application was developed through successive iterations and improved via continuous interactions with ChatGPT. Various technical and methodological challenges were addressed, leading to the creation of a functional application with multiple features, analysis methods, and tools. The process demonstrated how the use of large language models (LLMs) can break down barriers between humanities and science disciplines, enabling researchers without programming skills to develop complex applications in a short time.
ACM Computing Surveys, Journal year: 2025, Issue: unknown
Published: Jan. 13, 2025

Large language models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as generating and summarizing text, translation, and question-answering. Nowadays, LLMs have become very popular tools in natural language processing (NLP) tasks, with the capability to analyze complicated linguistic patterns and provide relevant responses depending on the context. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking, data poisoning, and personally identifiable information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs, along with application-based risks in various domains, such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks against LLMs, and review potential defense mechanisms. Additionally, this survey outlines existing research gaps and highlights future research directions.
Heritage, Journal year: 2024, Issue 7(3), pp. 1453–1471
Published: Mar. 11, 2024

Generative artificial intelligence (genAI) language models have become firmly embedded in public consciousness. Their abilities to extract and summarise information from a wide range of sources in their training data have attracted the attention of many scholars. This paper examines how four genAI large language models (ChatGPT, GPT4, DeepAI, Google Bard) responded to prompts asking (i) whether genAI would affect how cultural heritage will be managed in the future (with examples requested) and (ii) what dangers might emerge when relying heavily on genAI to guide heritage professionals in their actions. The systems provided examples, commonly drawing on and extending the status quo. Without a doubt, AI tools will revolutionise the execution of repetitive and mundane tasks, such as the classification of some classes of artifacts, or allow for predictive modelling of the decay of objects. Importantly, the prompts were used to assess the purported power of genAI to extract, aggregate, and synthesize large volumes of data from multiple sources, as well as its ability to recognise patterns and connections that people may miss. An inherent risk in the 'results' presented by genAI is that they are 'artifacts' of the system rather than being genuine. Since genAI tools are at present unable to purposively generate creative or innovative thoughts, it is left to the reader to determine whether any text that is out of the ordinary is meaningful or nonsensical. Additional risks identified concern the use of genAI without the required level of literacy, where overreliance may lead to a deskilling of general practitioners.
Publications, Journal year: 2025, Issue 13(1), pp. 12–12
Published: Mar. 12, 2025

The public release of ChatGPT in late 2022 has resulted in considerable publicity and led to widespread discussion of the usefulness and capabilities of generative Artificial intelligence (Ai) language models. Its ability to extract and summarise data from textual sources and present them as human-like contextual responses makes it an eminently suitable tool to answer questions users might ask. Expanding on a previous analysis of ChatGPT3.5, this paper tested what archaeological literature appears to have been included in the training phase of three recent Ai models: ChatGPT4o, ScholarGPT, and DeepSeek R1. While ChatGPT3.5 offered seemingly pertinent references, a large percentage of them proved to be fictitious. The more recent ScholarGPT, a model which is purportedly tailored towards academic needs, performed much better, but still had a high rate of fictitious references compared to the more general models ChatGPT4o and DeepSeek. Using 'cloze' analysis to make inferences on what texts may have been 'memorized' by each model, it was not possible to prove that any of the four genAi models had perused the full texts of the genuine references. It can be shown that all references provided by the other OpenAi models, as well as DeepSeek, that were found to be genuine were also cited on Wikipedia pages. This strongly indicates that the source base for at least some, if not most, of these references consists of those pages and thus represents, at best, third-hand material. This has significant implications in relation to the quality of the data available to the models to shape their answers, which are discussed.
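The 'cloze' analysis described above can be illustrated with a minimal sketch: mask a distinctive word in a passage the model may have seen in training, ask the model to fill the gap, and count exact matches as weak evidence of memorization. The passage, the mask token, and the `complete` callback below are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch of a cloze-style memorization probe (illustrative only).

def make_cloze(passage: str, target: str, mask: str = "[MASK]") -> str:
    """Replace the first occurrence of a distinctive target word with a mask."""
    assert target in passage, "target word must occur in the passage"
    return passage.replace(target, mask, 1)

def cloze_score(items, complete) -> float:
    """Fraction of cloze items where the model's fill-in matches the original.

    items    -- list of (passage, target_word) pairs from genuine references
    complete -- callable mapping a masked passage to the model's guessed word
    """
    hits = sum(
        1 for passage, target in items
        if complete(make_cloze(passage, target)).strip().lower() == target.lower()
    )
    return hits / len(items)

# Toy 'model' that only knows one passage verbatim:
memory = {"the site was excavated in [MASK] by a joint team": "1912"}
guess = lambda masked: memory.get(masked, "unknown")

items = [("the site was excavated in 1912 by a joint team", "1912")]
print(cloze_score(items, guess))  # → 1.0
```

A high score on such items suggests the passage was present in the training data; a low score, as the paper found, is consistent with the model never having seen the full text.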
Generative artificial intelligence (AI), and in particular large language models such as ChatGPT, have reached public consciousness, with a wide-ranging discussion of their capabilities and suitability for various professions. The extant literature on the ethics of generative AI revolves around its usage and application, rather than the ethical framework of the responses provided. In the education sector, concerns have been raised with regard to the ability of these models to aid in student assignment writing and the potentially concomitant academic misconduct when such work is submitted for assessment. Based on a series of 'conversations' with multiple replicates, using a range of prompts, this paper examines the capability of ChatGPT to provide advice on how to cheat in assessments. Since its release in November 2022, numerous authors have developed 'jailbreaking' techniques to trick ChatGPT into answering questions in ways other than its default mode. While the default mode activates a safety awareness mechanism that prevents ChatGPT from providing unethical advice, other modes partially or fully bypass this mechanism and elicit answers that are outside the expected boundaries. ChatGPT provided a wide range of suggestions on how to best cheat in university assignments, with some solutions common to most replicates ('plausible deniability,' adjustment of contract-written text). Some of ChatGPT's suggestions on how to avoid cheating being detected were cunning, if not slightly devious. The implications of these findings are discussed.
2022 ACM Conference on Fairness, Accountability, and Transparency, Journal year: 2024, Issue 2020, pp. 660–686
Published: Jun. 3, 2024

Large language models (LLMs) are increasingly appearing in consumer-facing products. To prevent problematic use, the organizations behind these systems have put content moderation guardrails in place that prevent them from generating content they consider harmful. However, most of these enforcement standards and processes are opaque. Although they play a major role in the user experience of these tools, automated content moderation tools have received relatively less attention than other aspects of the models. This study undertakes an algorithm audit of OpenAI's ChatGPT with the goal of better understanding its content moderation tools and their potential biases. To evaluate performance on a broad cultural range of content, we generate a dataset of 100 popular United States television shows and one to three synopses for each episode of the first season of each show (3,309 total synopses). We probe GPT's moderation endpoint (ME) to identify content flagged as violating, both for the synopses themselves and for GPT's own outputs when asked to generate a script based on a synopsis, also comparing the ME results against 81 real scripts from the same TV shows (269,578 total outputs). Our findings show that the ME flags a large number of GPT-generated scripts as violations (about 18% of GPT-generated scripts and 69% of real ones). Using show metadata, we find that maturity ratings, as well as certain genres (Animation, Crime, Fantasy, and others), are statistically significantly related to a script's likelihood of being flagged. We conclude by discussing the implications for LLM self-censorship and directions for future research and audit procedures.
Journal of Infrastructure Policy and Development, Journal year: 2024, Issue 8(7), pp. 4783–4783
Published: Jul. 29, 2024

Academic integrity has been at the centre of discussion with the adoption of Chat GPT by academics in their research. This study explored how academic integrity mitigates the desire to use ChatGPT for academic tasks among EFL pre-service teachers, taking into consideration the time factor, perceived peer influence, self-effectiveness, and self-esteem. The study utilized web-based questionnaires to elicit data from 300 pre-service teachers across educational fields, drawn from different schools around the world. Analysis was conducted using relevant statistical measures to test the four projected hypotheses. The findings provide evidence in support of Hypothesis 1, with a statistically significant path coefficient (β) of 0.442, a t-value of 3.728, and a p-value of 0.000. The acceptance of this hypothesis implies that when academic integrity improves, the impact of ChatGPT's time-saving aspect decreases. This suggests that those who have a firm dedication to honesty are less influenced by the tempting appeal of ChatGPT's features, highlighting the ethical factors that influence decision-making. The findings also provide support for Hypothesis 2, indicating a substantial inverse relationship, with a path coefficient (β) of 0.369, a t-value of 5.629, and a p-value of 0.001. These results indicate that stronger adherence to academic integrity is linked to a diminished effect of colleagues on the choice to use ChatGPT for academic tasks. The results suggest that academic integrity serves as a protective barrier against exogenous pressures or influences when it comes to embracing cutting-edge technology. However, in general, these findings revealed that there is a negative association between academically related factors (e.g., a sense of pressure, language self-confidence, and competence), as well as attitude toward ChatGPT use, and commitment towards academic integrity.
Research Square (Research Square), Journal year: 2023, Issue: unknown
Published: Sep. 19, 2023

Abstract. The generative artificial intelligence (AI) language model ChatGPT is programmed not to provide answers that are unethical or that may cause harm to people. By setting up user-created role-plays designed to alter ChatGPT's persona, ChatGPT can be prompted to answer with inverted moral valence, supplying unethical answers. In this mode, ChatGPT was asked for suggestions on how to avoid being detected when commissioning and submitting contract-written assignments. We conducted 30 iterations of the task, and we examine the types of suggested strategies and their likelihood of avoiding detection by assignment markers, or, if detected, of escaping a successful investigation of academic misconduct. Suggestions made by ChatGPT ranged from communications with contract writers and the general use of contract writing services to content blending and innovative distraction techniques. While the majority of suggested strategies has a low chance of escaping detection, recommendations related to obscuring plagiarism, as well as distraction techniques, have a higher probability of remaining undetected. We conclude that ChatGPT can be used with success as a brainstorming tool to provide cheating advice, but that its success depends on the vigilance of the assignment markers and the student's ability to distinguish between genuinely viable options and those that only appear workable but are not. In some cases, the advice given would actually decrease the likelihood of escaping detection.
Praxis der Rechtspsychologie, Journal year: 2024, Issue 34(1), pp. 89–102
Published: Jun. 1, 2024

Abstract: This article examines the possibilities and limits of preparing a person who is to give testimony for a psychological credibility assessment using ChatGPT, a popular chatbot based on artificial intelligence. To this end, general and specific questions were posed to ChatGPT-3.5 to determine its level of knowledge about such assessments, credible presentation in general, recovered memories, transferred knowledge, and successful lying. The results show that ChatGPT does possess basic knowledge; for example, it emphasizes the importance of consistent and detailed statements. However, it lacks a deeper expert understanding. Particularly with regard to credible recovered memories, it tends towards simplified or even incorrect accounts. It can be concluded that ChatGPT can be used for superficial information gathering in preparation for an assessment, but it cannot provide deep, let alone critical, insights into the assessment methodology. Implications and limitations of the article are discussed in conclusion.
Publications, Journal year: 2023, Issue 11(3), pp. 45–45
Published: Sep. 21, 2023

The recent public release of the generative AI language model ChatGPT has captured the public imagination and resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on its capabilities, as well as the practical and ethical implications of its use, has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to generative AI, and in particular ChatGPT, is that, in most cases, the raw data, i.e., the text of the original 'conversations,' have not been made available to the audience of the papers and thus cannot be drawn upon to assess the veracity of the arguments made and the conclusions derived therefrom. This paper provides a protocol for the documentation and archiving of these data.