Delete dtsp-trust_safety_glossary_of_terms/bn directory

Jaz-Michael King 2024-11-21 00:30:50 -05:00 committed by GitHub
parent 99440d8949
commit 7c86eba956
30 changed files with 0 additions and 190 deletions

@@ -1,3 +0,0 @@
Account Takeover
The scenario where an unauthorized user gains control of a user account, through means such as hacking, phishing or buying leaked credentials.
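
As an illustration of the "leaked credentials" vector, here is a minimal sketch (in Python) of checking a password against the public Pwned Passwords breach corpus; the range endpoint is real, while everything around it is a simplified assumption rather than a production defense.

```python
# Minimal sketch, assuming the public Pwned Passwords "range" endpoint:
# the client sends only the first 5 hex chars of the SHA-1 (k-anonymity),
# so the full hash never leaves the machine. Real ATO defenses layer
# rate limiting, device signals, MFA and more on top of checks like this.
import hashlib
import urllib.request

def password_is_breached(password: str) -> bool:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash-suffix>:<times-seen-in-breaches>".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

print(password_is_breached("password123"))  # True: widely leaked
```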

@@ -1,3 +0,0 @@
Astroturfing
Organized activity intended to create the deceptive appearance of broad, authentic grassroots support for or opposition to a given cause or organization, when in reality the activity is motivated, funded or coordinated by one or a small number of obscured sources.

@@ -1,5 +0,0 @@
Brigading
Coordinated, often pre-planned, mass online activity to affect a piece of content, or an account, or an entire community or message board.
Examples include coordinated upvoting or downvoting a post to affect its distribution, mass-reporting an account (usually falsely) for abuse in an attempt to cause the service provider to suspend it, or inundating a business with good or bad reviews.
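
To make the mass-reporting example concrete, a toy detector might escalate targets that draw an unusual number of distinct reporters in a short window; the report feed, identifiers and thresholds in this sketch are all hypothetical.

```python
# Hypothetical sketch: surface mass-reporting for human review instead
# of auto-actioning on raw report counts. The report feed, identifiers
# and thresholds are invented for illustration.
from collections import defaultdict, deque

WINDOW_SECS = 3600      # consider the last hour of reports
SPIKE_THRESHOLD = 50    # distinct reporters that trigger escalation

_recent = defaultdict(deque)  # target_id -> deque of (ts, reporter_id)

def record_report(target_id: str, reporter_id: str, ts: float) -> bool:
    """Record one abuse report; return True if the target should be
    escalated to human review as a possible brigading target."""
    q = _recent[target_id]
    q.append((ts, reporter_id))
    while q and ts - q[0][0] > WINDOW_SECS:
        q.popleft()                      # drop reports outside the window
    distinct_reporters = {r for _, r in q}
    return len(distinct_reporters) >= SPIKE_THRESHOLD
```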

@@ -1,5 +0,0 @@
Catfishing
The scenario where someone creates a fake persona on an online service, such as social media or a dating application, and forms a relationship with someone who believes the persona to be real.
This behavior is often associated with financial fraud and other forms of exploitation of the victim, as in the long-con scams colloquially known as "pig butchering."

@@ -1,9 +0,0 @@
Child Sexual Exploitation/Abuse Material (CSEA/CSAM)
Imagery or videos showing a child who is engaged, or depicted as being engaged, in explicit sexual activity. Child Sexual Exploitation and Abuse (CSEA) is a broader category that encompasses child sexual abuse material, other sexualised content depicting children, and grooming.
"Simulated Child Sexual Exploitation and Abuse Imagery" contains modified or invented depictions of children without the direct involvement of any underage subjects.
Experts, survivor groups, and the industry discourage the use of the term “Child Pornography,” which is still used as a legal term in multiple jurisdictions and international treaties.
CSAM is illegal in nearly all jurisdictions, making detection and removal of CSAM a high priority for online services.
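
One reason detection at scale is feasible is that known material can be matched by hash. A minimal sketch follows, with the caveat that real deployments use vetted hash lists from bodies such as NCMEC and robust perceptual hashes (e.g., PhotoDNA) rather than the stand-ins shown here.

```python
# Minimal sketch of hash-list matching. Plain SHA-256 and the placeholder
# entry below are stand-ins for illustration only; perceptual hashes are
# used in practice because they survive re-encoding and minor edits.
import hashlib

# Placeholder list; this entry is just the SHA-256 of the empty byte string.
KNOWN_HASHES = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def matches_known_material(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_material(b""))  # True: matches the harmless placeholder
```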

@@ -1,3 +0,0 @@
Content- and Conduct-Related Risk
The possibility of certain illegal, dangerous, or otherwise harmful content or behavior, including risks to human rights, that is prohibited by relevant policies and terms of service.

@@ -1,9 +0,0 @@
Coordinated Inauthentic Behavior
Organized online activity in which an account or group of accounts, including "fake" secondary accounts (which exist solely or mainly to engage in such campaigns), acts to mislead people or fraudulently elevate the popularity or visibility of content or accounts, such as mass-following an account to raise its clout.
In some cases, a single, hidden source or organization will deploy many fake accounts in order to create a false appearance of authentic and credible activity.
In other cases, people using their own, real accounts will coordinate online to achieve a misleading purpose, such as the appearance that a view or belief is more widespread than it is, or to cause wide distribution of a particular piece or type of content.
Sometimes called "platform manipulation" or "content manipulation."
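
As a concrete, deliberately simplified example of the mass-following signal mentioned above, a service might flag bursts of follows arriving from newly created accounts; the field names and thresholds below are assumptions for illustration, not any platform's actual API or policy.

```python
# Hypothetical sketch: one weak signal of coordinated inauthentic
# behavior is a burst of follows from very young accounts.
from dataclasses import dataclass

@dataclass
class FollowEvent:
    follower_created_at: float  # unix seconds the follower registered
    followed_at: float          # unix seconds the follow occurred

def suspicious_follow_burst(events: list[FollowEvent],
                            burst_secs: float = 600.0,
                            min_size: int = 20,
                            max_account_age: float = 86400.0) -> bool:
    """True if >= min_size follows from accounts younger than one day
    land within a single burst_secs window."""
    times = sorted(e.followed_at for e in events
                   if e.followed_at - e.follower_created_at <= max_account_age)
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > burst_secs:
            lo += 1                      # shrink window from the left
        if hi - lo + 1 >= min_size:
            return True
    return False
```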

@@ -1,9 +0,0 @@
Copyright Infringement
The use of material that is protected by copyright law (such as text, image, or video) in a way that violates the rights of the copyright holder, without the rightsholder's permission and without an applicable copyright exception or limitation.
This can include infringing creation of copies, distribution, display, or public performance of a covered work, or the unauthorized creation of derivative works.
Infringement may involve primary liability (for the person who did the infringing conduct) or secondary liability for others involved in that conduct (such as a hosting company whose service hosts images posted by a user).
In the United States, a digital service hosting user-generated content receives safe harbor under Section 512 of the Copyright Act, so long as it complies with the applicable notice and takedown procedures set forth in that law.

@@ -1,5 +0,0 @@
Counterfeit
The unauthorized manufacture or sale of merchandise or services with an inauthentic trademark, which may have the effect of deceiving consumers into believing they are authentic.
The manufacture or sale of counterfeit goods is a form of trademark infringement, and secondary liability for this conduct is a concern for online marketplaces.

@@ -1,9 +0,0 @@
Cross-Platform Abuse
Instances where a bad actor or group will organize a campaign of abuse (such as harassment, trolling or disinformation) using multiple online services.
This has the effect of making it more difficult and time-consuming for affected persons to have the abusive content removed, as they will be required to contact each service separately and explain the situation.
Sometimes, the same content will simply be re-posted across multiple platforms. In other cases, bad actors will divide content or conduct such that no one service carries the full abusive content. As a result, lacking full context of the entire campaign, or if a service's policy restricts its inquiry only to content or conduct that directly involves that service, a given service may determine that no violation has taken place.
Typically, such situations require research and integration of data from multiple services, and investigation of the background context of the bad actor(s) and affected person(s) to make more meaningful assessments and respond appropriately.

@@ -1,11 +0,0 @@
Defamation
A legal claim based on asserting something about a person that is shared with others and which causes harm to the reputation of the statement's subject (the legal elements and applicable defenses vary by jurisdiction).
Defamation can be conveyed through a range of media, including visually, orally, pictorially or by text.
In the United States, supported by First Amendment jurisprudence, the burden of proof to establish defamation is on the person alleging they have been defamed.
In other jurisdictions, such as some in Europe, the burden of proof may be on the defendant to establish they did not commit defamation.
These differences in legal approach and levels of associated legal risk may influence the takedown processes for defamation disputes adopted by online services in various localities.

@@ -1,5 +0,0 @@
Dehumanisation
Describing people in ways that deny or diminish their humanity, such as comparing a given group to insects, animals or diseases.
Some experts in this area cite dehumanizing speech as a possible precursor to violence (sometimes to the point of genocide), because it may make violent action seem appropriate or justified against "nonhuman" or "less-than-human" targets.

@@ -1,5 +0,0 @@
Disinformation
False information that is spread intentionally and maliciously to create confusion, encourage distrust, and potentially undermine political and social institutions.
Mal-information is another category of misleading information identified by researchers, information that is based on reality but is used to inflict harm on a person, organization or country by changing the context in which the information is presented.

@@ -1,7 +0,0 @@
Doxxing
The act of disclosing someone's personal, non-public information — such as a real name, home address, phone number or any other data that could be used to identify the individual — in an online forum or other public place without the person's consent.
Doxxing may lead to real world threats against the person whose information has been exposed, and for this reason it is often considered a form of online harassment.
Some services may also consider aggregating and disclosing publicly available information about a person in a menacing manner sufficient to constitute doxxing.

@@ -1,7 +0,0 @@
Farming
Content farming involves creating online content for the sole or primary purpose of attracting page views and increasing advertising revenue, rather than out of a desire to express or communicate any particular message.
Content farms often create web content based on popular user search queries (a practice known as "search engine optimization") in order to rank more highly in search engine results. The resulting "cultivated" content is generally low-quality or spammy, but can still be profitable because of the strategic use of specific keywords to manipulate search engine algorithms and lead users to navigate to a page, allowing the owner to "harvest clicks" for ad revenue.
Account farming involves creating and initially using accounts on services in apparently innocuous ways in order to build followers, age the account, and create a record, making the account appear authentic and credible, before later redirecting the account to post spam, disinformation, or other abusive content or selling it to those who intend to do so.

@@ -1,5 +0,0 @@
Glorification of Violence
Statements or images that celebrate past or hypothetical future acts of violence.
Some online services restrict or prohibit glorification of violence (including terrorism) on the reasoning that it may incite or intensify future acts of violence and foster a menacing or unsafe online environment, though it is challenging to distinguish glorification of a subject from other types of discussion of it.

@@ -1,5 +0,0 @@
Hate Speech
Abusive, hateful, or threatening content or conduct that expresses prejudice against a group or a person due to membership in a group, which may be based on legally protected characteristics, such as religion, ethnicity, nationality, race, gender identification, sexual orientation, or other characteristics.
There is no international legal definition of hate speech.

@@ -1,5 +0,0 @@
Impersonation
Online impersonation most often involves the creation of an account profile that uses someone else's name, image, likeness or other characteristics without that person's permission to create a false or misleading impression that the account is controlled by them.
Also known as "imposter accounts."

@@ -1,3 +0,0 @@
Incitement
To encourage violence or violent sentiment against a person or group.

@@ -1,13 +0,0 @@
# Introduction
As the Trust and Safety field grows — in significance, complexity, and number of practitioners — there is a corresponding value to ensuring a common understanding exists of key terms used by the people who work to keep users of digital services safe.
Although companies have long used combinations of people, processes, and technology to address content- and conduct-related risks, this field, following the trajectory of other technology specializations like cybersecurity and privacy, has reached a critical point where it has begun to formalize, mature, and achieve self-awareness. Important discussions are happening all around the world, in homes, schools, businesses, and at all levels of government, about what Trust and Safety should look like to best serve societies and their evolving relationships to the internet. But meaningful discussion has at times been limited by the lack of shared vocabulary.
Over the past year, the Digital Trust & Safety Partnership (DTSP) has been working to develop the first industry glossary of Trust and Safety terms. Following a public consultation, in which DTSP received valuable input from stakeholders including academic organizations, industry partners, regulators and others from around the world, we are releasing the first edition of the glossary.
Led by DTSP co-founder Alex Feerst, this glossary has the following objectives:
1. Aid the professionalization of the field and support nascent Trust and Safety teams as they build out their operations;
2. Support the adoption of agreed interpretations of critical terms used across the industry; and
3. Facilitate informed dialogue between industry, policymakers, regulators and the wider public.
The goal for this first edition has been to describe how key terms are used by practitioners in industry. These are not legal definitions, and their publication does not imply that every DTSP partner company fully agrees with every term as defined here.

@@ -1,5 +0,0 @@
Misinformation
False information that is spread unintentionally and usually not maliciously, which may nonetheless mislead or increase likelihood of harm to persons.
Mal-information is another category of misleading information identified by researchers, information that is based on reality but is used to inflict harm on a person, organization or country by changing the context in which the information is presented.

@@ -1,9 +0,0 @@
Online Harassment
Unsolicited repeated behavior against another person, usually with the intent to intimidate or cause emotional distress.
Online harassment may occur over many mediums (including email, social media, and other online services).
Online harassment may expand to include real-world abuse, and offline harassment may likewise transition online.
Online harassment may take the form of one abuser targeting a person or group with sustained negative contact, or it may take the form of many distinct individuals targeting an individual or group.

@@ -1,3 +0,0 @@
Service Abuse
Use of a network, product or service in a way that violates the provider's terms of service, community guidelines, or other rules, generally because it creates or increases the risk of harm to a person or group or tends to undermine the purpose, function or quality of the service.

@@ -1,3 +0,0 @@
Sock Puppets
Multiple, fake accounts used to create an illusion of consensus or popularity, such as by liking or reposting content in order to amplify it.

@@ -1,3 +0,0 @@
Spam
Unsolicited, low-quality communications, often (but not necessarily) high-volume commercial solicitations, sent through a range of electronic media, including email, messaging, and social media.
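
Spam's statistical signature is what classic Bayesian filters exploit; the toy scorer below computes naive Bayes log-odds with add-one smoothing over two invented miniature corpora, purely for illustration.

```python
# Toy sketch of Bayesian word scoring, the classic spam-filtering
# technique; both tiny corpora are invented for illustration.
from collections import Counter
import math

spam_docs = ["win free money now", "free offer click now"]
ham_docs  = ["meeting notes attached", "lunch at noon tomorrow"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_c, ham_c = word_counts(spam_docs), word_counts(ham_docs)
spam_n, ham_n = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def spam_score(text: str) -> float:
    """Log-odds that text is spam under naive Bayes with add-one smoothing."""
    score = 0.0
    for w in text.split():
        p_spam = (spam_c[w] + 1) / (spam_n + len(vocab))
        p_ham  = (ham_c[w] + 1) / (ham_n + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free money now") > 0)  # True: scores as spam
```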

@@ -1,11 +0,0 @@
Synthetic Media
Content which has been generated or manipulated to appear as though based on reality, when it is in fact artificial. Also referred to as manipulated media.
Synthetic media may sometimes (but not always) be generated through algorithmic processes (such as artificial intelligence or machine learning).
A deepfake is a form of synthetic media where an image or recording is altered to misrepresent someone doing or saying something that was not done or said.
Generally, synthetic or manipulated media (including "deepfakes"), may be used within the context of abuse to deceive or cause harm to persons, such as causing them to appear to say things they never said, or perform actions which they have not (as in the case of "Synthetic Non-Consensual Exploitative Images").
Synthetic media may also be used to depict events that have not happened.

@@ -1,9 +0,0 @@
Terrorist and Other Violent Extremist Content (TVEC)
Content produced by or supportive of groups that identify as, or have been designated as terrorist or violent organizations, or content that promotes acts of terrorism or violent extremism.
There is no universally agreed international definition of terrorism or violent extremism and definitions for these terms vary significantly across jurisdictions.
Approaches to defining the category include actor- and behavior-based frameworks, and in order to detect and remove such content, online services may rely on research and lists of terrorist or extremist organizations created by subject matter expert organizations, such as the United Nations Security Council's sanctions list.
TVEC content is increasingly a focus of lawmakers and regulators concerned with preventing its availability.

@@ -1,5 +0,0 @@
Troll
A user who intentionally provokes hostility or confusion online.
Distinguishing a troll, or trollish behavior, from other criticism can be challenging. A troll may make valid points, but generally does so with the intention to irritate.

@@ -1,7 +0,0 @@
Violent Threat
A statement or other communication that expresses an intent to inflict physical harm on a person or a group of people.
Violent threats may be direct, such as threats to kill or maim another person; they may also be indirectly implied through metaphor, analogy or other rhetoric that allows the speaker plausible deniability about their meaning or intent.
Often overlaps with incitement, such as making a public statement that a person deserves to be harmed, or encouraging others to harm them.