320 resources related to Blacklist
- Topics related to Blacklist
- IEEE Organizations related to Blacklist
- Conferences related to Blacklist
- Periodicals related to Blacklist
- Most published Xplore authors for Blacklist
ICC 2021 - IEEE International Conference on Communications
IEEE ICC is one of the two flagship IEEE conferences in the field of communications; Montreal hosts the conference in 2021. Each annual IEEE ICC typically attracts approximately 1,500-2,000 attendees and presents over 1,000 research works. As well as being an opportunity to share pioneering research ideas and developments, the conference is also an excellent networking and publicity event, allowing businesses and clients to connect and giving companies the opportunity to publicize themselves and their products among leaders of the communications industry from all over the world.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
Since 1980, the IEEE Symposium on Security and Privacy has been the premier forum for presenting developments in computer security and electronic privacy, and for bringing together researchers and practitioners in the field.
IEEE Global Communications Conference (GLOBECOM) is one of the IEEE Communications Society’s two flagship conferences dedicated to driving innovation in nearly every aspect of communications. Each year, more than 2,900 scientific researchers and their management submit proposals for program sessions to be held at the annual conference. After extensive peer review, the best of the proposals are selected for the conference program, which includes technical papers, tutorials, workshops and industry sessions designed specifically to advance technologies, systems and infrastructure that are continuing to reshape the world and provide all users with access to an unprecedented spectrum of high-speed, seamless and cost-effective global telecommunications services.
IEEE INFOCOM solicits research papers describing significant and innovative research contributions to the field of computer and data communication networks. We invite submissions on a wide range of research topics, spanning both theoretical and systems research.
IEEE Communications Magazine was the third most-cited journal in telecommunications and the eighteenth most-cited journal in electrical and electronics engineering in 2004, according to the annual Journal Citation Report (2004 edition) published by the Institute for Scientific Information. Read more at http://www.ieee.org/products/citations.html. This magazine covers all areas of communications, such as lightwave telecommunications, high-speed data communications, personal communications ...
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
Design and analysis of algorithms, computer systems, and digital networks; methods for specifying, measuring, and modeling the performance of computers and computer systems; design of computer components, such as arithmetic units, data storage devices, and interface devices; design of reliable and testable digital devices and systems; computer networks and distributed computer systems; new computer organizations and architectures; applications of VLSI ...
The purpose of TDSC is to publish papers in dependability and security, including the joint consideration of these issues and their interplay with system performance. These areas include but are not limited to: System Design: architecture for secure and fault-tolerant systems; trusted/survivable computing; intrusion and error tolerance, detection and recovery; fault- and intrusion-tolerant middleware; firewall and network technologies; system management ...
Theory and applications of industrial electronics and control instrumentation science and engineering, including microprocessor control systems, high-power controls, process control, programmable controllers, numerical and program control systems, flow meters, and identification systems.
2009 Asia-Pacific Conference on Computational Intelligence and Industrial Applications (PACIIA), 2009
This paper describes a TDMA channel scheduling algorithm for WIA-PA networks based on blacklist technology (Blacklist-TCS). The main objective of this channel scheduling algorithm is to reduce computation time while maximizing the utilization of network resources, thereby improving network throughput, reducing transmission delay, and decreasing the number of retries. In this paper, we consider the channel ...
2016 International Conference on Recent Trends in Information Technology (ICRTIT), 2016
This paper focuses mainly on meta-information in the comment stream of a particular message on social network services (SNS). Owing to the extreme popularity of SNS, comments may accumulate at a high rate immediately after a social message is posted. The application model for the meta-information watchword channel is a procedure to ...
2008 IEEE/ACS International Conference on Computer Systems and Applications, 2008
Phishing is a web attack that is increasing in both volume and sophistication. Blacklists are used to resist this type of attack, but fail to keep their lists up-to-date. This paper proposes a new technique and architecture for a blacklist generator that maintains an up-to-date blacklist of phishing sites. When a page claims that it belongs to a given company, ...
2011 7th International Conference on Information Assurance and Security (IAS), 2011
By using string matching, signature-based network intrusion detection systems (NIDSs) can achieve higher accuracy and lower false-alarm rates than anomaly-based systems. But the matching process is very expensive with respect to the performance of a signature-based NIDS: the cost is at least linear in the size of the input string, and the CPU occupancy rate can ...
2012 2nd IEEE International Conference on Parallel, Distributed and Grid Computing, 2012
With the advent of the replication-based approach for distributed environments, a major coordination problem, consensus, can be solved in the presence of some malicious replicas. We therefore attempt to design an agreement algorithm with proactive detection of such malicious replicas. The paper presents BFT-r, a Byzantine fault tolerance algorithm with a rotating coordinator. The basic idea is to rotate ...
This paper describes a TDMA channel scheduling algorithm for WIA-PA networks based on blacklist technology (Blacklist-TCS). The main objective of this channel scheduling algorithm is to reduce computation time while maximizing the utilization of network resources, thereby improving network throughput, reducing transmission delay, and decreasing the number of retries. In this paper, we consider the channel scheduling algorithm under real-time, reliability, and low-energy-consumption requirements together with the time-varying character of the channel, and present our algorithm as a variant of a graph-coloring algorithm. A performance study is carried out by physical experiment. The results show that our algorithm performs better than the non-blacklist algorithm in transmission success ratio, end-to-end delay, and retry-exhausted count.
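The coloring-style assignment described above can be sketched as follows. This is a simplified illustration, not the paper's Blacklist-TCS implementation: the link, conflict, and blacklist structures are hypothetical, and each link is assumed to have at least one non-blacklisted channel.

```python
# Sketch of blacklist-aware TDMA scheduling as a greedy coloring:
# each link gets a (timeslot, channel) pair, skipping channels on
# the link's blacklist (e.g. channels with poor recent quality) and
# channels already used in the same slot by conflicting links.

def schedule(links, channels, blacklist, conflicts):
    """links: list of link ids; channels: available channel ids;
    blacklist: link -> set of banned channels;
    conflicts: link -> set of links that cannot share a slot+channel.
    Assumes every link has at least one non-blacklisted channel."""
    assignment = {}  # link -> (slot, channel)
    for link in links:
        slot = 0
        while link not in assignment:
            # channels taken in this slot by already-scheduled conflicting links
            taken = {assignment[o][1] for o in conflicts.get(link, set())
                     if o in assignment and assignment[o][0] == slot}
            usable = [c for c in channels
                      if c not in blacklist.get(link, set()) and c not in taken]
            if usable:
                assignment[link] = (slot, usable[0])
            else:
                slot += 1  # spill to the next timeslot
    return assignment
```

With two conflicting links and channel 1 blacklisted for the first, the first link is pushed onto channel 2 and the second takes channel 1 in the same slot.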
This paper focuses mainly on meta-information in the comment stream of a particular message on social network services (SNS). Owing to the extreme popularity of SNS, comments may accumulate at a high rate immediately after a social message is posted. The application model for the meta-information watchword channel is a procedure to screen user activities in social networks such as blogs and forums. The administrator adds crude or unkind words to a blacklist table and generates a list of keywords; these keywords are maintained and updated by the administrator with new unkind words drawn from the comments. A background agent inspects every post made on a user's or a friend's wall: when a user posts a message, the agent screens the post and checks whether any crude or unkind word is present. In this paper, we model a probabilistic approach to meta-information for blacklisting unkind words on SNS. Moreover, we present a blacklisting algorithm that can filter unkind words and incrementally update the keywords with the latest incoming comments in the real world. If any such content is detected, the message is banned by the background wall filter using the blacklisting algorithm. The proposed application is also applied to the correction of spelling errors in queries and to the reformulation of queries in web search. Furthermore, the proposed method raises a warning message to any user who sends unkind words to others; if the user continues in the same manner, that user is blocked permanently.
From considerable experimental results and a real-case demonstration, we verify that blacklisting is highly precise and effective, improving on existing techniques in terms of accuracy and effectiveness across distinct settings.
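The filtering and warning workflow described above can be sketched as a small class. This is a minimal illustration of the general idea, not the paper's probabilistic model: the class name, tokenization, and warning threshold are all invented for the example.

```python
# Sketch of a blacklist-based wall filter: ban posts containing
# blacklisted words, count offenses per user, and permanently block
# repeat offenders. The keyword list can be extended incrementally.

import re

class WallFilter:
    def __init__(self, blacklist, max_warnings=1):
        self.blacklist = set(w.lower() for w in blacklist)
        self.warnings = {}          # user -> offense count
        self.blocked_users = set()
        self.max_warnings = max_warnings

    def add_words(self, words):
        """Incrementally extend the blacklist with newly seen words."""
        self.blacklist.update(w.lower() for w in words)

    def post(self, user, message):
        """Return 'ok' if the post passes, 'banned' if it was filtered,
        or 'user_blocked' if the user was blocked for repeat offenses."""
        if user in self.blocked_users:
            return "user_blocked"
        tokens = set(re.findall(r"\w+", message.lower()))
        if tokens & self.blacklist:
            self.warnings[user] = self.warnings.get(user, 0) + 1
            if self.warnings[user] > self.max_warnings:
                self.blocked_users.add(user)
            return "banned"
        return "ok"
```

A first offense yields a warning and a banned post; a second offense blocks the user permanently, matching the escalation the abstract describes.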
Phishing is a web attack that is increasing in both volume and sophistication. Blacklists are used to resist this type of attack, but fail to keep their lists up-to-date. This paper proposes a new technique and architecture for a blacklist generator that maintains an up-to-date blacklist of phishing sites. When a page claims that it belongs to a given company, the company's name is searched in a powerful search engine such as Google. The domain of the page is then compared with the domain of each of Google's top-10 search results. If a matching domain is found, the page is considered a legitimate page; otherwise it is considered a phishing site. Preliminary evaluation of our technique has shown an accuracy of 91% in detecting legitimate pages and 100% in detecting phishing sites.
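The decision rule above can be sketched directly. The search step is abstracted away here (the top-10 result URLs are passed in as a list) so the sketch stays self-contained; the function names are illustrative, not from the paper.

```python
# Sketch of the domain-comparison rule: a page claiming to belong to
# a company is 'legitimate' only if its domain matches the domain of
# one of the top-10 search results for the company name.

from urllib.parse import urlparse

def domain_of(url):
    """Extract the host, dropping a leading 'www.' for comparison."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def classify(page_url, top10_result_urls):
    """Return 'legitimate' if the page's domain appears among the
    top-10 search-result domains, else 'phishing'."""
    page_dom = domain_of(page_url)
    result_doms = {domain_of(u) for u in top10_result_urls}
    return "legitimate" if page_dom in result_doms else "phishing"
```

A look-alike domain such as a typosquatted host never appears in the search results for the real company name, so it falls through to the 'phishing' branch.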
By using string matching, signature-based network intrusion detection systems (NIDSs) can achieve higher accuracy and lower false-alarm rates than anomaly-based systems. But the matching process is very expensive with respect to the performance of a signature-based NIDS: the cost is at least linear in the size of the input string, and the CPU occupancy rate can exceed 80 percent in the worst case. This problem greatly limits the performance of a signature-based NIDS in a large operational network. In this paper, we present a context-aware packet filter scheme aiming to mitigate this problem. In particular, our scheme incorporates a list technique, namely a blacklist, to help filter network packets based on the confidence of their IP domains. Moreover, our scheme adapts and updates the blacklist contents using statistics-based blacklist generation according to the actual network environment. In the experiment, we implemented our scheme and present the first experimental evaluation of its effectiveness.
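The pre-filtering idea can be sketched in two functions: one that regenerates the blacklist from per-IP alert statistics, and one that drops blacklisted traffic before the expensive matching stage. The thresholds and data shapes are invented for illustration, not taken from the paper.

```python
# Sketch of a statistics-based blacklist pre-filter: IPs with a high
# historical alert rate are blacklisted, and their packets are dropped
# before reaching the costly signature-matching engine.

def build_blacklist(alert_counts, packet_counts, min_packets=10, threshold=0.5):
    """Blacklist IPs whose alert rate exceeds `threshold` over at
    least `min_packets` observed packets (hypothetical parameters)."""
    return {ip for ip, pkts in packet_counts.items()
            if pkts >= min_packets
            and alert_counts.get(ip, 0) / pkts > threshold}

def filter_packets(packets, blacklist):
    """Drop packets from blacklisted source IPs; only the remaining
    packets proceed to signature matching."""
    return [p for p in packets if p["src"] not in blacklist]
```

Regenerating the blacklist periodically from fresh counters is what lets the filter adapt to the actual network environment, as the abstract describes.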
With the advent of the replication-based approach for distributed environments, a major coordination problem, consensus, can be solved in the presence of some malicious replicas. We therefore attempt to design an agreement algorithm with proactive detection of such malicious replicas. The paper presents BFT-r, a Byzantine fault tolerance algorithm with a rotating coordinator. The basic idea is to rotate the role of the primary coordinator among all the participating replicas. Allowing each participating replica to become primary, however, increases the possibility that a faulty replica is selected as primary. To avoid this problem, our protocol runs a mutable blacklist mechanism in which an array of previously detected faulty replicas is maintained and propagated among the nodes, so that decisions from a faulty replica are avoided. The mutable blacklist mechanism is in line with the proactive nature of the proposed protocol. The necessary correctness proof is presented along with a simulation analysis. The protocol is robust and exhibits better efficiency for long-lived applications and systems.
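The rotation-with-blacklist idea reduces to a small primary-selection rule, sketched below. This is only the selection step, not the full BFT-r protocol; the function name and view-number convention are assumptions.

```python
# Sketch of primary rotation over a blacklist of detected faulty
# replicas: the primary role rotates round-robin per view, but
# blacklisted replicas are skipped so a known-faulty node is never
# selected as coordinator.

def next_primary(replicas, view, blacklist):
    """Pick the primary for view number `view` by rotating over the
    replicas not on the (propagated) blacklist of faulty nodes."""
    healthy = [r for r in replicas if r not in blacklist]
    if not healthy:
        raise RuntimeError("no non-faulty replica available")
    return healthy[view % len(healthy)]
```

Because every node applies the same rule to the same propagated blacklist, all correct replicas agree on which replica is primary in each view.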
In recent years, the damage from cyber attacks caused by sophisticated malware has continuously increased, making it more difficult to take countermeasures using traditional approaches such as antivirus and firewall products. Against the intrusion of malware, we propose an automated countermeasure system named Autonomous Evolution of Defense, which mitigates the risk of actual damage by controlling malware's internet connections and, in addition, optimizes the system's operating conditions. The system takes countermeasures immediately to mitigate risk without disrupting business. However, a graylist of malicious addresses generated by malware analysis systems contains many false-positive addresses and is very "noisy" for use in blocking access. We therefore propose a new technique for improving the accuracy of this unreliable graylist of addresses using image authentication. We report here on the implementation of our system and the results of its evaluation.
To understand malware behaviors, collecting and classifying malware samples is a critical issue for system security researchers. This paper develops a Proactive Malware Collection and Classification System (PMCCS), which consists of a Proactive Malware Collection Unit (PMCU) and an Automatic Malware Classification Unit (AMCU). To collect useful samples, PMCU uses P2P software to actively search for suspicious samples, such as software crack tools. Over a 3-year period, PMCU collected 42,300 samples. To automatically classify useful samples, AMCU uploads suspicious samples to VirusTotal, a free online virus scanner. Based on the VirusTotal scanning results, 11,600 suspicious samples were flagged at least once by antivirus software (AVW), and 70% of these samples are Trojans and virus tools, which are usually threatening malware. Moreover, these 11,600 suspicious samples are classified into a blacklist with high suspiciousness, an ambiguous list with moderate suspiciousness, and a whitelist with low suspiciousness. The blacklist can be used to evaluate the performance of AVW in terms of false negatives (FN); the whitelist, in terms of false positives (FP). From the blacklist and whitelist, AMCU selects useful malware samples that trigger high counts of FN and FP against AVW.
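The three-way triage by detection count can be sketched as a simple thresholding rule. The threshold values here are invented for illustration; the paper does not state its cut-offs in this abstract.

```python
# Sketch of triaging a sample into blacklist / ambiguous list /
# whitelist from the fraction of AV engines that flagged it in a
# VirusTotal-style scan. The 0.5 and 0.1 cut-offs are hypothetical.

def triage(detections, total_engines, high=0.5, low=0.1):
    """Return 'blacklist', 'ambiguous', or 'whitelist' based on the
    detection rate detections/total_engines."""
    rate = detections / total_engines
    if rate >= high:
        return "blacklist"
    if rate >= low:
        return "ambiguous"
    return "whitelist"
```

Samples landing on the blacklist that some engine misses expose false negatives of that engine; whitelist samples an engine flags expose its false positives, which is how the abstract proposes to evaluate AVW.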
At present, malicious software, or malware, has increased considerably and forms a serious threat to Internet infrastructure. It has become the major source of most malicious activities on the Internet, such as direct attacks, (distributed) denial-of-service (DoS) activities, and scanning. Infected machines may join a botnet and be used as remote attack tools to perform malicious activities controlled by the botmaster. In this paper we present our methodology for detecting any connection to or from a malicious IP address that is expected to be a command-and-control (C&C) server. Our detection method is based on a blacklist of malicious IPs, formed from several intelligence feeds at once. We process the network traffic and match the source and destination IP addresses of each connection against the IP blacklist. The intelligence feeds are automatically updated each day, and detection runs in real time.
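The matching step is essentially a set-membership test over addresses merged from several feeds, sketched below. The connection representation and function names are illustrative assumptions.

```python
# Sketch of blacklist-based C&C detection: merge several intelligence
# feeds into one IP blacklist, then flag any connection whose source
# or destination address is on the list.

def merge_feeds(*feeds):
    """Aggregate several intelligence feeds (iterables of IP strings)
    into a single blacklist set."""
    blacklist = set()
    for feed in feeds:
        blacklist.update(feed)
    return blacklist

def detect(connections, blacklist):
    """Return the connections whose source or destination IP is
    blacklisted (a possible C&C contact)."""
    return [c for c in connections
            if c["src"] in blacklist or c["dst"] in blacklist]
```

Since set membership is O(1) on average, this check keeps up with live traffic, which is what makes the real-time detection the abstract claims feasible; re-running merge_feeds daily mirrors the feed-update cycle.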
Modern web users are exposed to a browser security threat called drive-by-download attacks, which occur by simply visiting a malicious Uniform Resource Locator (URL) that embeds code to exploit web browser vulnerabilities. Many web users tend to click such URLs without considering the underlying threats. URL blacklists are an effective countermeasure to such browser-targeted attacks. URLs are frequently updated; therefore, collecting fresh malicious URLs is essential to ensure the effectiveness of a URL blacklist. We propose a framework called automatic blacklist generator (AutoBLG) that automatically identifies new malicious URLs using a given existing URL blacklist. The key idea of AutoBLG is to expand the search space of web pages while reducing the number of URLs to be analyzed by applying several pre-filters, thereby accelerating the process of generating blacklists. AutoBLG comprises three primary primitives: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully extract new and previously unknown drive-by-download URLs.
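The three primitives form a pipeline, which can be sketched as a skeleton where expansion, pre-filters, and verification are passed in as functions. These callables stand in for the paper's search-space expansion, cheap pre-filters, and honeypot-based analysis; none of them are from the actual AutoBLG implementation.

```python
# Skeleton of the three AutoBLG primitives as a pipeline:
# URL expansion -> URL filtration -> URL verification.

def auto_blg(seed_blacklist, expand, prefilters, verify):
    """seed_blacklist: known-malicious URLs.
    expand(url) -> candidate URLs near a known-bad URL (expansion).
    prefilters: cheap predicates that discard unlikely candidates.
    verify(url) -> True if expensive analysis (e.g. a client honeypot)
    confirms the candidate as malicious."""
    candidates = set()
    for url in seed_blacklist:
        candidates.update(expand(url))            # URL expansion
    for keep in prefilters:                       # URL filtration
        candidates = {u for u in candidates if keep(u)}
    return {u for u in candidates if verify(u)}   # URL verification
```

The ordering matters: the cheap pre-filters shrink the candidate set before the expensive verification step, which is the source of the speed-up the abstract describes.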
Social networking sites such as Twitter, Facebook, and Weibo are extremely popular today, and many malicious users exploit them to persuade legitimate users for various purposes: to promote their products, to spread spam links, to defame other people, and so forth. As an ever-increasing number of users join these sites, fake accounts have become a major issue. In this paper, fake accounts are detected using a blacklist instead of a traditional spam-word list. The blacklist is created using a topic-modeling approach and a keyword-extraction approach. We conduct an evaluation experiment with both the 1KS-10KN dataset and the Social Honeypot dataset, and compare the accuracy of the traditional spam-word-list-based approach with our blacklist-based approach. Decorate, a meta-learner classifier, is applied to distinguish fake Twitter accounts from legitimate ones. Our approach achieves 95.4% accuracy, with a true positive rate of 0.95.
No standards are currently tagged "Blacklist"