
Posts

Showing posts from July, 2009

Financial

Financial economics is primarily concerned with building models to derive testable or policy implications from acceptable assumptions. Some fundamental ideas in financial economics are portfolio theory, the Capital Asset Pricing Model and the Modigliani-Miller Theorem. Portfolio theory studies how investors should balance risk and return when investing in many assets or securities. The Capital Asset Pricing Model describes how markets should set the prices of assets in relation to how risky they are. The Modigliani-Miller Theorem describes conditions under which corporate financing decisions are irrelevant for value, and acts as a benchmark for evaluating the effects of factors outside the model that do affect value.
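As an illustrative sketch (not part of the original post), the CAPM pricing relation described above, E[Ri] = Rf + beta_i * (E[Rm] - Rf), can be computed directly; the numbers below are hypothetical:

```python
# Illustrative sketch of the CAPM relation: E[Ri] = Rf + beta_i * (E[Rm] - Rf).
# All input values are hypothetical, chosen only for demonstration.

def capm_expected_return(risk_free: float, beta: float, market_return: float) -> float:
    """Expected return of an asset under the Capital Asset Pricing Model."""
    return risk_free + beta * (market_return - risk_free)

# A stock with beta 1.2, a 3% risk-free rate and an 8% expected market return:
r = capm_expected_return(0.03, 1.2, 0.08)
print(round(r, 4))  # 0.03 + 1.2 * 0.05 = 0.09
```

The asset's expected return exceeds the market's because its beta (its sensitivity to market risk) is above one.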

A common assumption is that financial decision makers act rationally (see Homo economicus; efficient market hypothesis). However, recently, researchers in experimental economics and experimental finance have challenged this assumption empirically. They are also challenged - theoretically - by behavioral fi…

Computer Science

Computer science (or computing science) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.[1][2][3] It is frequently described as the systematic study of algorithmic processes that describe and transform information. According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[4] Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on t…

Assignment_Lanka

Assignment Lanka has no age limit for students, and it tries to go beyond the normal classroom environment. One of our course objectives is to become better writers about information technology. There are many blogs available on several topics, but here we aim to give the blog viewer something of value. From the beginning we try to introduce some sample assignments with IT projects, and Point of View shows nature and environmental projects.


Computer Science Assignment refers to tasks assigned to students by their teachers, to be completed mostly outside of class; the name derives from the fact that most students do the majority of such work at home. Common homework assignments may include a quantity or period of reading to be performed, writing or typing to be completed, problems to be solved, a school project to be built (such as a diorama or display), or other skills to be practiced. In addition to that, here we discuss Finan…

Randomization

A final interesting observation concerns randomization (for example, in the routing protocol or in the nodes' locations). Indeed, studies show that randomization has a big impact on attacks, as an attacker can no longer attack the network deterministically. Unfortunately, it also slows the network down considerably. P2P networks often have scalability problems, and anything which slows performance down is generally avoided. This is probably the main reason why randomization is avoided in P2P networks.

Reputation-based Systems

This condemnation of all hierarchical structures also makes us reject reputation-based systems. Nodes in such systems have a "reputation" determined by all other nodes [17]. Typically, each node will publish a list of the nodes it trusts, making it impossible for a node to change its own reputation by itself. Before initiating a download, a node will first check the reputation of the node it wants to download from and then decide whether or not to proceed. In a sense, the higher the reputation, the more importance a node has.
While this might seem like a good direction, we will argue that, as it introduces a notion of hierarchy, this approach constitutes a weakness. The problem is that nodes with a bigger reputation have more power than other nodes. Other nodes will tend to trust them more, and they are able to influence other nodes' reputations more effectively. An attacker simply needs a little patience, waiting for one of his nodes to gain sufficient trust in order to launch his attack. If the at…
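The trust-list scheme described above can be sketched in a few lines. This is a minimal illustration, not the thesis's protocol: the node names, the "count how many peers list you" reputation measure, and the threshold are all hypothetical assumptions.

```python
# Minimal sketch of a reputation-based P2P download decision.
# Each node publishes the set of node IDs it trusts; here a node's
# reputation is simply the number of peers that list it, and the
# download threshold is a hypothetical illustration value.

from typing import Dict, Set

def reputation(node: str, trust_lists: Dict[str, Set[str]]) -> int:
    """Count how many peers include `node` in their published trust list."""
    return sum(1 for trusted in trust_lists.values() if node in trusted)

def should_download(source: str, trust_lists: Dict[str, Set[str]],
                    threshold: int = 2) -> bool:
    """Proceed with a download only if the source is trusted by enough peers."""
    return reputation(source, trust_lists) >= threshold

trust_lists = {
    "A": {"B", "C"},
    "B": {"C"},
    "D": {"B", "C"},
}
print(should_download("C", trust_lists))  # trusted by A, B and D -> True
print(should_download("A", trust_lists))  # trusted by nobody -> False
```

Note how this illustrates the weakness argued above: once an attacker's node accumulates enough entries in other nodes' trust lists, every peer's decision function starts favoring it.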

First Conclusions: 4.1 Only Pure P2P!

We have now been introduced to P2P networks and have observed most possible attacks. So what are the first conclusions we can draw at this point?
First of all, when designing a P2P network, it is of utmost importance not to use a mixed P2P model. As soon as we introduce any notion of hierarchy, we automatically present a target. If a node is more important, more trusted or better connected than other nodes, then an attacker can use this to his advantage. This permits malicious users to attack the network strategically, which is far more dangerous. If there is absolutely no hierarchical structure, then the network presents no strategic targets because of its uniformity.
Paper [4], for example, studies the effects supernodes have on worm propagation in Gnutella. In Gnutella, normal nodes connect to supernodes, which are in turn connected to each other, acting as a kind of "highway". It is shown [19] that they play a significant role in worm propagation in the network, even without being spe…

Defenses against Man-in-the-Middle and Eclipse Attacks

Against man-in-the-middle attacks, carefully chosen cryptographic protocols may be a good way to stop the attack. Pricing could also help against the Sybil attack version. The problem with such solutions is that they introduce a serious slow-down and harm the scalability of the network.
The main defense against eclipse attacks is simply to use a pure P2P network model. An even better solution would be to additionally use a randomization algorithm to determine the nodes' locations (as, for example, in Freenet). If the nodes in a pure P2P network are randomly distributed, then there are no strategic positions and an attacker can't control his nodes' positions. It would be nearly impossible to separate two subnetworks from one another in such conditions.

3.4 Eclipse Attack

Before an attacker can launch an eclipse attack, he must gain control over a certain number of nodes along strategic routing paths. Once he has achieved this, he can separate the network into different subnetworks. Thus, if a node wants to communicate with a node from the other subnetwork, its message must at some point be routed through one of the attacker's nodes. The attacker thus "eclipses" each subnetwork from the other. In a way, eclipse attacks are large-scale man-in-the-middle attacks.
An eclipse attack can be the continuation of a Sybil attack; in this case, the attacker will try to place his nodes on the strategic routing paths. We argued before that man-in-the-middle attacks don't pose a great threat to P2P networks. However, such a large-scale attack involving strategic targeting is very serious. The attacker can completely control a subnetwork from the other subnetwork's point of view.
If an attacker manages an eclipse attack (it is not an easy attack), he can attack the network in …

Defenses against Sybil Attacks

Unfortunately, without a central trusted authority, it is not possible to convincingly stop Sybil attacks [10]. Carefully configured reputation-based systems might be able to slow the attack down, but they will not do much more. Indeed, once the attacker has legitimately validated a certain number of identities, he can validate the rest.
A good defense is to render a Sybil attack unattractive by making it impossible to place malicious identities in strategic positions. We have already seen that structured P2P networks are more resilient to worm propagation. For the same reasons, this is a good defense mechanism here, as an attacker will not be able to place his identities where he wishes. Randomly dispersed malicious identities are far less dangerous than strategically placed ones, especially if the P2P network is of considerable size.
Another proposition could be to include the node's IP in its identifier. A malicious node would thus not be able to spoof fake identities, as he would be bound to a limi…
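The IP-in-identifier proposition above can be sketched by deriving the node ID from a hash of the node's IP address, so that one machine cannot mint arbitrarily many distinct identities. The exact encoding (SHA-1, hex) is an assumption for illustration only:

```python
# Sketch of the proposal above: bind a node's identifier to its IP address
# by hashing it, so one machine cannot present arbitrarily many identities.
# The choice of SHA-1 and hex encoding is an illustrative assumption.

import hashlib

def node_id(ip: str) -> str:
    """Derive a deterministic identifier from the node's IP address."""
    return hashlib.sha1(ip.encode("ascii")).hexdigest()

print(node_id("192.0.2.7"))
print(node_id("192.0.2.7") == node_id("192.0.2.7"))  # deterministic: True
print(node_id("192.0.2.7") == node_id("192.0.2.8"))  # different IP: False
```

Since the identifier is a pure function of the IP, a Sybil attacker is limited to one identity per address he actually controls, which is exactly the bound the text describes.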

3.3 Sybil Attack

Sybil attacks are part of the control plane category. The idea behind this attack is that a single malicious entity can present multiple identities, and thus gain control over part of the network [10].
Once this has been accomplished, the attacker can abuse the protocol in any way possible. For instance, he might gain responsibility for certain files and choose to pollute them. If the attacker can position his identities strategically, the damage can be considerable. He might choose to continue with an eclipse attack, or slow the network down by rerouting all queries in a wrong direction.

3.2.1 Defenses

Although file poisoning attacks sound pretty dangerous, we will argue that they do not pose a serious threat to P2P networks [6]. The main point is that P2P applications often run in the background: when a polluted file is downloaded by a user, it stays available for a while before being inspected and cleansed. After a period of time, all polluted files are eventually removed and the authentic files become more available than the corrupted ones. The reason file-poisoning attacks are still successful today is due to three factors:
• clients are unwilling to share (rational attack);
• corrupted files are not removed from users' machines fast enough;
• users give up downloading if the download seemingly stalls.
These three factors each give an advantage, in different ways, to the most available file, which at the beginning is probably the polluted one. Simulations show these factors tend to greatly slow down the removal of polluted files from the network.
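The dynamics described above can be illustrated with a toy simulation. This is not the simulation cited in the text: the starting counts, the inspection rates, and the download model are all hypothetical values chosen only to show the qualitative effect of slow removal.

```python
# Toy simulation of the dynamics described above: polluted copies start out
# more available, spread while uninspected, and are removed only when users
# inspect them. All parameters are hypothetical illustration values.

import random

random.seed(1)

def simulate(rounds: int, p_inspect: float):
    polluted, authentic = 100, 10      # polluted file starts more available
    history = []
    for _ in range(rounds):
        # Each round one download occurs, favoring the more available version.
        total = polluted + authentic
        if random.random() < polluted / total:
            polluted += 1              # polluted copy spreads before inspection
        else:
            authentic += 1
        # A fraction of polluted copies is inspected and deleted each round.
        polluted = max(0, polluted - int(polluted * p_inspect))
        history.append((polluted, authentic))
    return history

fast = simulate(200, p_inspect=0.05)    # copies inspected quickly
slow = simulate(200, p_inspect=0.005)   # copies linger, as the text describes
print("fast inspection, final (polluted, authentic):", fast[-1])
print("slow inspection, final (polluted, authentic):", slow[-1])
```

With fast inspection the polluted copies die out and authentic copies dominate; with slow inspection the polluted file keeps its availability advantage, matching the argument that removal speed is the deciding factor.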

3.2 File Poisoning

File poisoning attacks operate on the data plane and have become extremely
commonplace in P2P networks. The goal of this attack is to replace a file in the
network by a false one. This polluted file is of course of no use.
It has been reported [7][8][9], that the music industry have massively released
false content on P2P networks. Moreover, companies such as Overpeer1 or Retsnap
2 publicly offer their pollution-based services to the entertainment industry
as a way for protecting copyrighted materials.
In order to attack by file poisoning, malicious nodes will falsely claim owning a
file, and upon a request will answer with a corrupt file. For a certain amount
of money, Overpeer or Retsnap will release huge amounts of fake copies of a file
on their servers. Moreover, all messages passing through malicious node can be
poisoned (similar to a man-in-the-middle attack). These factors may give the
poisoned file a high availability, making it more attractive to download the true
file.

3.1 Rational Attacks

For P2P services to be effective, participating nodes must cooperate, but in most scenarios a node represents a self-interested party, and cooperation can neither be expected nor enforced. A reasonable assumption is that a large fraction of P2P nodes are rational and will attempt to maximize their consumption of system resources while minimizing the use of their own.
For example, nodes might realize that by not sharing, they save precious upload bandwidth. In the case of copyrighted material, file sharing can have worse outcomes: as it is illegal and quite easy for authorities to find out who is sharing specific files, it can lead to a very big fine. These are good enough reasons to motivate nodes to become "self-interested". If a large number of nodes are self-interested and refuse to contribute, the system may destabilize. Successful P2P systems must be designed to be robust against this class of failure.

3.0 Specific P2P Attacks and Defenses

We will consider two different planes of attack in this section: the data plane
and the control plane. Attacking the data plane means attacking the data used
by the P2P application itself, for example by poisoning it or rendering it in any
way unavailable. On the other hand, attacking the control plane means directly
attacking the functionality of the P2P application, trying to render it slower
or as inefficient as possible. This is generally done by using weaknesses in the
routing protocol. Depending on the attacker’s goal, he will choose to attack in
one plane or the other, or both.
These two planes are not completely independent. For instance, by attacking the data plane and corrupting many files, users will tend to download more instances of a file, thus slowing down the traffic, which is typically the aim of a control plane attack. Vice versa, eclipse attacks, which are in the control plane, can render data inaccessible, which is the primary objective of a data plane attack.
The possibilities of…

2.4 The Human Factor

The human factor should always be a consideration when security is at issue. We previously saw that the upswing P2P applications have experienced is also due to their ease of installation and use, their low cost (most of the time free) and their great rewards. Even novice users have little difficulty using such applications to download files that other users intentionally or accidentally shared on the P2P network.
This is yet another security problem P2P applications pose. Empowering a user, especially a novice, to make choices regarding the accessibility of their files is a significant risk. Because of its convenient and familiar look, an application such as Kazaa can cause a user to unwittingly share the contents of his documents or, even worse, his whole hard disk.
Unfortunately, novice users do not understand the implications of their inaction with regard to security. Simply closing the application, for instance, isn't enough, as most of them continue running in the background. Remarkably, milli…

2.3.1 Defenses

Before considering any technical defense, P2P users must be made aware of the risks. Leaving a personal computer unattended on a broadband internet connection without a complete firewall and anti-virus is begging for trouble. Blaster, for example, exploited a vulnerability only 5 days after Microsoft made it public with a "Security Update" that fixed it.
One solution would be for P2P software developers not to write any buggy software! Perhaps that is a far-fetched goal, but it would be better to favor strongly typed languages such as Java or C# instead of C or C++, where buffer overflows are much easier to produce.
Another interesting observation is that hybrid P2P systems have a vulnerability pure P2P systems do not. By making some nodes more special than others (for example, the better connectivity of Gnutella's supernodes), the attacker has the possibility to target these strategic nodes first in order to spread the worm more efficiently later on. Pure P2P does not offer such targets as all nodes…

2.3 Worm Propagation

Worms already pose one of the biggest threats to the internet. Currently, worms such as Code Red or Nimda are capable of infecting hundreds of thousands of hosts within hours, and better-engineered worms would no doubt be able to achieve the same result in a matter of seconds. Worms propagating through P2P applications would be disastrous: this is probably the most serious threat.
There are several factors which make P2P networks attractive for worms [13]:
• P2P networks are composed of computers all running the same software. An attacker can thus compromise the entire network by finding only one exploitable security hole.
• P2P nodes tend to interconnect with many different nodes. A worm running on the P2P application would no longer lose precious time scanning for other victims; it would simply fetch the list of the victim's neighboring nodes and spread on.
• P2P applications are used to transfer large files. Some worms have to limit their size in order to fit in one TCP…

2.2.1 Defenses

Without a central trusted authority, which generally does not exist in P2P networks, it is not possible to detect a man-in-the-middle attack. Nodes have no information about their neighbors and no way of identifying them later with certainty. Fortunately, as man-in-the-middle attacks are mostly useless in P2P networks, this is not very alarming news.

2.2 Man-in-the-middle Attack

In a man-in-the-middle attack, the attacker inserts himself undetected between two nodes. He can then choose to stay undetected and spy on the communication, or manipulate it more actively. He can achieve this by inserting, dropping or retransmitting previous messages in the data stream. Man-in-the-middle attacks can thus achieve a variety of goals, depending on the protocol; in many cases the goal is identity spoofing or dispatching false information.
Man-in-the-middle attacks are a nightmare in most protocols (especially when there is a form of authentication). Fortunately, they are less interesting in P2P networks. All the nodes have the same "clearance" and the traffic's content is shared anyway, which makes identity spoofing useless. If the P2P application supports different clearances between nodes, then the implications of man-in-the-middle attacks would depend on the protocol itself. Possible attacks could be spreading polluted files on behalf of trusted entities or broadcasting …

2.1.1 Defenses

The first problem is detecting a DOS attack, as it can be mistaken for heavy utilization of the machine. DDOS attacks using reflection are extremely hard to block due to the enormous number and diversity of machines a malicious user can involve in the attack (virtually any machine can be turned into a zombie). In addition, as the attacker is often only indirectly involved (he attacks through the zombies and the reflective network), it is often impossible to identify the source of the attack. Because of these factors, there exists no general way of blocking DOS attacks.

Figure 2.1: A DDOS attack. The attacker sends the order to the computers he personally controls (masters), which then forward it to the zombies, which DOS as many machines as possible and spoof their IP to be the victim's, who will receive all the replies.

A widely used technique to hinder DOS attacks is "pricing". The host will submit puzzles to his clients before continuing the requested computation, thus ensuring that the clients…
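The pricing idea can be sketched as a hashcash-style proof-of-work puzzle. This is an illustrative assumption about how such a puzzle might look, not the scheme the text cites: the client must find a nonce whose hash has a given number of leading zero bits, which is expensive to solve but cheap for the host to verify.

```python
# Sketch of the "pricing" idea above: before servicing a request, the host
# sends a challenge; the client must find a nonce whose SHA-256 hash has a
# given number of leading zero bits. Difficulty and encoding are
# illustrative assumptions, not taken from the thesis.

import hashlib

def solve(challenge: bytes, difficulty: int) -> int:
    """Find a nonce such that sha256(challenge + nonce) starts with
    `difficulty` zero bits. Expected work grows as 2**difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Checking a submitted solution costs the host only one hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

challenge = b"request-42"
nonce = solve(challenge, difficulty=12)
print(verify(challenge, nonce, 12))  # True
```

The asymmetry (exponential work to solve, one hash to verify) is what makes pricing deter flooding: a zombie must pay for every request, while the host pays almost nothing to check.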

1.4 Thesis Organisation

This thesis is organised in four main sections:
1. First, we will look at several vulnerabilities and attacks found in general networks.
2. We will then look at more specific attacks specially designed for P2P networks.
After these two analyses, we will try to draw some first conclusions. We will then proceed to our case study: Freenet.
3. We will thoroughly describe the Freenet structure.
4. Finally, we will try to find potential weaknesses in Freenet and ways to improve them.
After this we will draw our final conclusions and explore possible new directions.

1.3 Future and Vulnerability

Some futurists believe P2P networks will trigger a revolution in the near future. The ease of use, the huge choice and the low price (often free) have been the main reasons for the explosion of file-sharing applications over the past years. Add to this the fact that internet connection speeds are steadily increasing, the arrival of newer, faster algorithms (Caltech's FAST algorithm was clocked 6,000 times faster than the internet's current protocol), as well as the inability to control or monitor such networks. This P2P revolution simply means huge quantities of data will be available almost instantly to anybody for free.
This, of course, is disturbing news for many industries (music, movie, game...), as P2P networks provide an alternative way of acquiring many copyrighted products. These industries have been very actively waging war against "digital piracy" for nearly a decade. The results of this war are controversial, but as P2P networks have never stopped growing during this period of time,…

1.2 Historical

Although P2P networking has existed for quite some time, it has only recently been popularized and will probably be subject to even bigger revolutions in the near future.
Napster was the first P2P application which really took off. The way it worked was quite simple: a server indexed all the files each user had. When a client queried Napster for a file, the central server would answer with a list of all indexed clients who already possessed the file.
Napster-like networks are now known as first-generation networks. Such networks didn't have a complicated implementation and often relied on a central server (hybrid P2P). The central server model makes sense for many reasons: it is an efficient way to handle searches and allows one to retain control over the network. However, it also means there is a single point of failure. When lawyers decided Napster should be shut down, all they had to do was disconnect the server.
Gnutella was the second major P2P network. After Napster's demise, the creators of Gn…

1.1 Peer-to-Peer Network Definition

Throughout this thesis we will study peer-to-peer networks, henceforth abbreviated P2P. A P2P network is a network that relies on the computing power of its clients rather than on the network itself [1]. This means the clients (peers) perform the necessary operations to keep the network going, rather than a central server. Of course, there are different levels of peer-to-peer networking:
• Hybrid P2P: There is a central server which keeps information about
the network. The peers are responsible for storing the information. If they
want to contact another peer, they query the server for the address.
• Pure P2P: There is absolutely no central server or router. Each peer
acts as client and server at the same time. This is also sometimes referred
to as “serverless” P2P.
• Mixed P2P: Between “hybrid” and “pure” P2P networks. An example
of such a network is Gnutella which has no central server but clusters its
nodes around so-called “supernodes”.

Attacks on Peer-to-Peer Networks

Abstract
In this thesis, we collect information about known attacks on P2P networks. We try to classify them and study the different possible defense mechanisms. As a case study, we take Freenet, a third-generation P2P system, which we analyze in depth, including simulating possible behaviors and reactions. Finally, we draw several conclusions about what should be avoided when designing P2P applications and propose a new approach to making a P2P application as resilient as possible to malicious users.

Blackboard Learning System

All users have to register and need a username and password to access the software. At UAE University, students and faculty have the same ACE domain username and password for both the Oasis and linked Banner system and Blackboard 6.1, which makes it convenient. Once the user logs into Blackboard, a personal My Institution page is displayed. This main page has several main areas including a series of Navigation buttons, Navigation tabs (where the user can navigate between different sections of the program), a Module area (which contains announcements, course links etc.), a Tools area (containing utilities such as My Grades, Send an E-mail, Calendar etc.) and a Search Box (which can be used for information retrieval on the web from the Blackboard site itself). Within the Module area, students can see the courses they are registered for on Blackboard. The courses can also be accessed by clicking on the My Courses tab within the My Institution page.

The Blackboard course environment consis…

3. Basic Policies

3.1. The Association is a group of Sri Lankans (and their family members) for themselves and for the country. The
activities of the Association are mainly targeted for the benefit of the members and Sri Lanka. However, the
Executive Committee on behalf of the Association may consider providing services to non-members, though not
obligatory.
3.2. The Association closely collaborates with the Government of Sri Lanka, Sri Lanka Embassy in Japan, all Sri
Lankan organizations, Sri Lankan people living in Japan, Japanese institutions and Japanese people.
3.3. The activities of the Association shall be implemented with mutual respect and cooperation, punctuality, sense of
responsibility, friendship and solidarity among members.
3.4. All attempts shall be made to implement any activity in the most efficient, effective and economical manner with
the best possible, but affordable quality.
3.5. In implementation of projects and programs, in procurement of goods and services priority is given to Sri Lankan

Objectives

The objects of the Association, listed under Article 2 of the Constitution, are as follows:
(a) To strengthen the Sri Lankan community in Japan;
(b) To promote social and cultural relations between Sri Lanka and Japan; and
(c) To contribute to the socioeconomic development of Sri Lanka.
To accomplish the above objects, the organization should have a sound financial background as well as a strong structure. Therefore, the following object should also be added to the strategic plan:
(d) To strengthen the Association financially, structurally, and in membership.

Sri Lanka Association in Japan Five-Year Strategic Plan (2006-2010)

1. Background
The Sri Lanka Association in Japan has a history of nearly three decades. The time has now come to restructure and reorganize it with a new outlook and greater vigor, for the following reasons.
1.1. The Association has been in a dormant state for a few years, although it had earned a good reputation in the past.
1.2. His Excellency the Ambassador is very keen to reactivate and strengthen the Association so as to see it rendering fruitful services to Sri Lankans living in Japan and to Sri Lanka.
1.3. The Sri Lankan community in Japan and its expectations and capacities have diversified and expanded significantly since the establishment of the Association.
1.4. Along with the globalization process and the rapid development of information and communication technology, services can now be rendered more efficiently and effectively than in the past.
Against this background, the constitution was revised at the Special General Meeting held on 04 June 2006. There is an urgent need for a proper s…

1.1 Stars – Introduction

Stars are like our Sun, but there are many variations of them. One thing is true: they all begin their life with the spark of nuclear fusion at their cores. Almost every dot we see in the night sky is a star, and all of those stars exist within our Milky Way Galaxy. Very rarely will a lone star exist in the spaces between galaxies; the norm is for stars to exist only within galaxies.

There are two main groups of stars:
* Population II stars - old, metal-poor stars
* Population I stars - new, metal-rich stars

In addition, there are two main endings of a star's life:
* Normal stars - like our Sun - end their life as a Planetary Nebula and White Dwarf
* Large stars end their life in a supernova and end up as a Neutron Star or Black Hole

Lifetime of a normal star:
* A dust cloud forms a Main Sequence star that burns for about 10 billion years
* The star ends its Main Sequence life and swells to a Red Giant (about the size of Earth's orbit) and burns for 100 million years
* The star sheds its layers a…

1.0 Galaxy

The image above - a screen grab from The Sky version 6 - demonstrates what this feature looks like. In ancient times, it was seen as a river of milk spilled by the gods, so the feature was named the Milky Way - and the name stuck.

The Milky Way is actually a galaxy - a system of billions of stars gathered by mutual gravitation. Our knowledge of our galaxy (and many others) is still very new but much progress has been made. By using radio observations, we were able to determine the structure of our galaxy.

Different Programming Languages

PL/1 (Programming Language 1)
It is a business and scientific language, suitable for batch processing and terminal usage. Designed to include the best features of FORTRAN and COBOL.

RPG (Report Program Generator)
RPG is more of a system for preparing reports than a true language. It is widely used on minicomputers and mainframes to prepare business reports, accounts receivable, inventory listings, statements, etc. RPG is one of the easiest languages to learn.

BASIC (Beginners' All-purpose Symbolic Instruction Code)
It is designed for easy data input and output, and offers editing features.

APL
APL is one of the most powerful interactive languages yet developed.

PASCAL
It embodies the principles of structured programming long advocated by computer programming teachers and is a very powerful language.

C Language
The C programming language is one of the "smallest" programming languages. C is one of the most flexible and versatile of all programming languages. It is being used to develop opera…

Disadvantages

Less efficiency

4th Generation Languages
#. Introduced packages (Eg: Word, Excel, PowerPoint) that can be used by non-computer professionals.
#. These languages can be sorted into four categories:
i. Financial Planning / Modeling Languages
ii. Query Languages
iii. Report Generators
iv. Application Generators

5th Generation Languages
#. Introduced languages for Artificial Intelligence (AI) programming.
Eg: LISP, PROLOG

Special Purpose Languages
These are "tailor-made" for a particular type of problem.
Eg: Simulation, controls and experiments

Command Languages
These are languages used to control the operations of the computer (languages to instruct the operating system).
Eg: MS Access

Different types of High Level Languages

i. Procedural languages
Eg: - Pascal, COBOL, C
ii. Object oriented languages
Eg: - Smalltalk, Java, C++
iii. Visual languages
Eg: - Visual Basic, Visual C++, Visual FoxPro, Visual Pascal
Advantages
User friendly
Fewer instructions to be written
Machine independent
No hardware knowledge required

The Generations of Programming Languages

Programming languages have gone through several stages, as below:
a. 1st Generation Languages - Machine Languages
b. 2nd Generation Languages - Low-Level or Symbolic Languages
c. 3rd Generation Languages - High-Level or Procedure Languages
d. 4th Generation Languages
e. 5th Generation Languages

1st Generation Languages
#. Programs were written in machine code.
#. Needed to remember machine codes and had to write many machine instructions.
#. No need for translators, therefore execution speed was high.
#. Programs were machine dependent.
#. Modifications of programs were difficult.

2nd Generation Languages (Low-Level Languages)
#. Programs were written in assembly language. Had to write too many instructions.
#. Translators were needed (assemblers).
#. Programs were machine dependent.
#. Modifications of programs were difficult.

3rd Generation Languages (High-Level Languages)
#. Introduced programming languages much more similar to English.
#. Fewer instructions had to be written.
#. Programs were machine inde…