
Social Media and Misleading Information in a Democracy: A Mechanism Design Approach

ABSTRACT:

In this paper, we present a resource allocation mechanism to incentivize misinformation filtering among strategic social media platforms, and thus to indirectly prevent the spread of fake news. We consider the presence of a strategic government and private knowledge of how misinformation affects the users of the social media platforms. Our proposed mechanism strongly implements all generalized Nash equilibria for efficient filtering of misleading information in the induced game, with a balanced budget. We also show that for quasi-concave utilities, our mechanism implements a Pareto efficient solution.

EXISTING SYSTEM :

The average trust function h_i(a_i) captures the impact of filter a_i on the trust on common knowledge across the users of platform i. A low value of h_i(a_i) implies that a_i leads to low trust on common knowledge for the users of platform i, and vice versa. In practice, platform i can measure the opinions of its users through surveys [16], and thus eventually estimate the impact of filter a_i using the average trust function h_i(a_i). Recall that, in our framework, the government is the strategic player 0 ∈ J who seeks to maximize the trust on common knowledge of the users of all social media platforms. Therefore, the government selects an action a_0 ∈ A = [0, 1] that designates a lower bound which must be satisfied by the aggregate average trust of all platforms in I. To this end, we refer to the action a_0 as the government's lower bound on trust on common knowledge.

Let N_i ∈ ℕ be the total number of users of the social media platform i ∈ I. The fraction of the number of users of i with respect to the total number of users of all platforms is n_i = N_i / Σ_{l∈I} N_l. The fraction n_i represents the contribution of the users of platform i to the average trust on common knowledge. Since Σ_{i∈I} n_i = 1, the aggregate average trust is Σ_{i∈I} n_i · h_i(a_i). In our framework, the government's role is to select the lower bound a_0 for the aggregate average trust. After the government decides on a_0, each platform i ∈ I that participates in the game must select a filter a_i that satisfies the constraint Σ_{i∈I} n_i · h_i(a_i) ≥ a_0.
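The aggregate trust computation above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the trust functions h_i are hypothetical placeholders (the paper only assumes each h_i maps a filter a_i ∈ [0, 1] to a trust level), and the function names are our own.

```python
def aggregate_average_trust(users, filters, trust_fns):
    """Return sum_i n_i * h_i(a_i), where n_i = N_i / sum_l N_l."""
    total_users = sum(users)
    fractions = [N / total_users for N in users]  # the fractions n_i
    return sum(n * h(a) for n, h, a in zip(fractions, trust_fns, filters))

def satisfies_lower_bound(users, filters, trust_fns, a0):
    """Check the government's constraint: aggregate average trust >= a_0."""
    return aggregate_average_trust(users, filters, trust_fns) >= a0

# Example with two platforms and hypothetical linear trust functions.
users = [3000, 1000]                          # N_1, N_2  ->  n = (0.75, 0.25)
trust_fns = [lambda a: a, lambda a: 0.5 * a]  # illustrative h_1, h_2
filters = [0.8, 0.4]                          # chosen filters a_1, a_2
# aggregate = 0.75 * 0.8 + 0.25 * 0.2 = 0.65
print(aggregate_average_trust(users, filters, trust_fns))
print(satisfies_lower_bound(users, filters, trust_fns, a0=0.6))
```

Note that the constraint couples the platforms: a platform with a large user fraction n_i can single-handedly push the aggregate above or below the government's bound a_0.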

EXISTING SYSTEM DISADVANTAGES:

1. LESS ACCURACY

2. LOW EFFICIENCY

PROPOSED SYSTEM :

Each platform i's message m_i consists of three components: the minimum average trust that platform i proposes to achieve through filtering; p̃^i := (p̃^i_l : l ∈ D_{-i}), p̃^i ∈ R^{|D_{-i}|}_{≥0}, the collection of prices that platform i is willing to pay or receive per unit change in the filters of the other competing platforms (except i) and the government's lower bound; and ã^i = (ã^i_k : k ∈ D_i), ã^i ∈ R^{|D_i|}, the profile of filters proposed by platform i for all competing platforms (including i) and the government's lower bound.

Remark 3. Note that each platform proposes a filter for itself, denoted by ã^i_i, in its message m_i. However, platform i does not propose a price for ã^i_i. Thus, every platform can influence its own filter, but not the associated price.
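The message structure and the rule in Remark 3 can be sketched as a small data class. This is an illustrative encoding under our own naming assumptions (the paper only specifies that m_i contains a proposed trust level, prices p̃^i over D_{-i}, and a filter profile ã^i over D_i); keys 0, 1, 2, ... below stand for the government and the platforms.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """One platform's announcement m_i in the proposed mechanism (sketch)."""
    platform: int       # the announcing platform i
    min_trust: float    # minimum average trust platform i proposes to achieve
    prices: dict        # l -> price p~^i_l for each l in D_{-i} (excludes i)
    filters: dict       # k -> filter a~^i_k for each k in D_i (includes i)

    def __post_init__(self):
        # Remark 3: a platform proposes its own filter but never its own price.
        if self.platform in self.prices:
            raise ValueError("a platform cannot price its own filter")

# Platform 1 proposes prices for platform 2 and the government's bound (key 0),
# and filters for everyone including itself.
m1 = Message(platform=1, min_trust=0.7,
             prices={2: 0.10, 0: 0.05},
             filters={1: 0.8, 2: 0.5, 0: 0.6})
```

Separating who proposes a filter from who prices it is what lets the mechanism tax or subsidize each platform based only on the other players' announcements, which is the standard route to a balanced budget.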

PROPOSED SYSTEM ADVANTAGES:

1. HIGH ACCURACY

2. HIGH EFFICIENCY

SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS:
• Programming Language : Python
• Front End Technologies : TKInter/Web (HTML, CSS, JS)
• IDE : Jupyter/Spyder/VS Code
• Operating System : Windows 8/10

HARDWARE REQUIREMENTS:

• Processor : Intel Core i3
• RAM Capacity : 2 GB
• Hard Disk : 250 GB
• Monitor : 15″ Color
• Mouse : 2 or 3 Button Mouse
• Keyboard : Standard Windows Keyboard

For More Details of Project Document, PPT, Screenshots and Full Code
Call/WhatsApp – 9966645624
Email – info@srithub.com
