2012
DOI: 10.3745/kipstc.2012.19c.1.047

Detecting Malicious Scripts in Web Contents through Remote Code Verification

Abstract: Sharing cross-site resources has been adopted by many recent websites in the forms of service-mashup and social network services. In this change, exploitation of the new vulnerabilities increases, which includes inserting malicious codes into the interaction points between clients and services instead of attacking the websites directly. In this paper, we present a system model to identify malicious script codes in the web contents by means of a remote verification while the web contents downloaded from multipl…

Cited by 2 publications (3 citation statements)
References 6 publications
“…A URL, also known as a "web address," serves as a distinct identifier for locating resources on [the web]. This procedure helps to examine the structure of the webpage and looks for any suspicious code [10].…”
Section: Content Feature (mentioning)
confidence: 99%
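The statement above refers to examining a webpage's structure for suspicious code. A minimal sketch of that idea, assuming a toy signature list (`eval(`, `document.write(`, `unescape(`) — real detectors use far richer features — and using only Python's standard-library `html.parser`:

```python
from html.parser import HTMLParser

# Illustrative signatures only (assumption: a real system would use many more).
SUSPICIOUS_PATTERNS = ("eval(", "document.write(", "unescape(")

class ScriptScanner(HTMLParser):
    """Collect inline <script> bodies and flag ones matching simple patterns."""

    def __init__(self):
        super().__init__()
        self._in_script = False
        self.flagged = []  # suspicious inline script bodies found so far

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        # html.parser delivers <script> content through handle_data
        if self._in_script and any(p in data for p in SUSPICIOUS_PATTERNS):
            self.flagged.append(data.strip())

scanner = ScriptScanner()
scanner.feed('<html><script>eval(unescape("%61"))</script><p>ok</p></html>')
print(len(scanner.flagged))  # → 1
```

Pattern matching on raw script text is only a first pass; the paper's remote-verification model goes further by checking the downloaded contents server-side before they reach the client.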
“…These domains are used for phishing [2][3][4][5][6] (e.g., spear phishing), Command and Control (C&C) [7] and a vast set of virus and malware [8] attacks. Therefore, the ability to identify a malicious domain in advance is a massive game-changer [9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26]. A common way of identifying malicious/compromised domains is to collect information about the domain names (alphanumeric characters) and network information (such as DNS and passive DNS data).…”
Section: Introduction (mentioning)
confidence: 99%
“…A common way of identifying malicious/compromised domains is to collect information about the domain names (alphanumeric characters) and network information (such as DNS and passive DNS data). This information is then used to extract a set of features, according to which machine learning (ML) algorithms are trained based on a massive amount of data [11][12][13][14][15][17][18][19][20][21][22]24,[26][27][28]. A mathematical approach can also be used in various ways [16,26], such as measuring the distance between a known malicious domain name and the analyzed domain (benign or malicious) [26].…”
Section: Introduction (mentioning)
confidence: 99%
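The two approaches this statement names — lexical feature extraction for ML classifiers, and distance-style comparison against known-bad domains — can be sketched as follows. This is a toy illustration, not any cited paper's method: the feature set is an assumption, and the comparison uses the standard library's `difflib.SequenceMatcher` ratio rather than a true Levenshtein distance:

```python
import math
from difflib import SequenceMatcher

def lexical_features(domain: str) -> dict:
    """Toy lexical features of the kind ML-based detectors might use (assumption)."""
    name = domain.split(".")[0]  # look only at the leftmost label
    digits = sum(c.isdigit() for c in name)
    # Shannon entropy of the character distribution (algorithmically
    # generated domains tend to score higher than dictionary words)
    counts = {c: name.count(c) for c in set(name)}
    entropy = -sum(n / len(name) * math.log2(n / len(name)) for n in counts.values())
    return {
        "length": len(name),
        "digit_ratio": digits / len(name),
        "entropy": round(entropy, 3),
    }

def similarity(candidate: str, known_bad: str) -> float:
    """Similarity in [0, 1] against a known malicious domain; high values
    flag likely typosquats. A real system might use Levenshtein distance."""
    return SequenceMatcher(None, candidate, known_bad).ratio()

print(lexical_features("paypa1-login.example"))
print(similarity("paypa1.example", "paypal.example"))  # close to 1.0: one-char swap
```

Features like these would feed a trained classifier, while the similarity score supports the distance-based approach mentioned in [26].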