Friday, July 31, 2009

Financial

Financial economics is primarily concerned with building models to derive testable or policy implications from acceptable assumptions. Some fundamental ideas in financial economics are portfolio theory, the Capital Asset Pricing Model, and the Modigliani-Miller Theorem. Portfolio theory studies how investors should balance risk and return when investing in many assets or securities. The Capital Asset Pricing Model describes how markets should set the prices of assets in relation to how risky they are. The Modigliani-Miller Theorem describes conditions under which corporate financing decisions are irrelevant for value, and acts as a benchmark for evaluating the effects of factors outside the model that do affect value.
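
As a rough illustration of the CAPM relation just described, here is a minimal sketch in Python that computes an asset's expected return from its beta; the risk-free rate, market return and beta used below are made-up numbers, purely for illustration.

# CAPM (illustrative): expected return = risk-free rate + beta * (market return - risk-free rate)
def capm_expected_return(risk_free_rate, beta, expected_market_return):
    # The asset earns the risk-free rate plus a risk premium scaled by its beta.
    return risk_free_rate + beta * (expected_market_return - risk_free_rate)

r_f = 0.03   # assumed risk-free rate (3%)
r_m = 0.08   # assumed expected market return (8%)
beta = 1.2   # assumed sensitivity of the asset to market movements
print(f"Expected return: {capm_expected_return(r_f, beta, r_m):.2%}")   # 9.00%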

A common assumption is that financial decision makers act rationally (see Homo economicus; efficient market hypothesis). However, researchers in experimental economics and experimental finance have recently challenged this assumption empirically. It is also challenged theoretically by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents.

Other common assumptions include market prices following a random walk, or asset returns being normally distributed. Empirical evidence suggests that these assumptions may not hold, and in practice, traders and analysts, and particularly risk managers, frequently modify the "standard models".

While in economics models are mainly employed to judge social welfare, financial economists are more concerned with empirical predictions.

Thursday, July 30, 2009

Computer Science


Computer science (or computing science) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.[1][2][3] It is frequently described as the systematic study of algorithmic processes that describe and transform information. According to Peter J. Denning, the fundamental question underlying computer science is, "What can be (efficiently) automated?"[4] Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.

The general public sometimes confuses computer science with vocational areas that deal with computers (such as information technology), or thinks that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.

Assignment_Lanka


Assignment Lanka has no age limit for its students, and it tries to go beyond the usual classroom environment. One of our objectives is to help readers become better writers about information technology. There are many blogs on many topics, but here we aim to give the viewer something useful. To begin with, we introduce some sample assignments and IT projects, and the Point of View section presents nature and environmental projects.


A Computer Science Assignment refers to tasks assigned to students by their teachers, to be completed mostly outside of class; the name comes from the fact that most students do the majority of such work at home. Common homework assignments may include a quantity or period of reading to be performed, writing or typing to be completed, problems to be solved, a school project to be built (such as a diorama or display), or other skills to be practiced. In addition to these, we also discuss Financial Assignments here.

Wednesday, July 29, 2009

Randomization

A final interesting observation is randomization (for example in the routing protocol
or in the nodes’ location). Indeed, studies show that randomization has a
big impact on attacks as an attacker cannot deterministically attack the network
any longer. Unfortunately, it also considerably slows the network down. P2P
networks often have scalability problems and anything which slows performance
down is generally avoided. This is probably the main reason why randomization
is avoided in P2P networks.

Reputation-based Systems


This condemnation of all hierarchical structures also makes us reject reputation-based
systems. Nodes in such systems have a “reputation” determined by all
other nodes [17]. Typically, each node will publish a list of the nodes it trusts,
making it impossible for a node to change its own reputation by itself. Before
initiating a download, a node will first check the reputation of the node it wants
to download from and then decide whether to proceed or not. In a sense, the
higher the reputation, the more importance a node has.
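
A minimal sketch of how such a reputation score might be computed from published trust lists is shown below; the node names and the simple counting rule are illustrative assumptions, not details taken from [17].

# Illustrative only: a node's reputation is the number of *other* nodes that
# list it in their published trust lists, so it cannot raise its own score.
trust_lists = {
    "A": ["B", "C"],   # node A publishes the nodes it trusts
    "B": ["C"],
    "C": ["A", "B"],
    "D": ["C"],
}

def reputation(node, trust_lists):
    return sum(1 for owner, trusted in trust_lists.items()
               if owner != node and node in trusted)

for n in sorted(trust_lists):
    print(n, reputation(n, trust_lists))   # C ends up with the highest reputation
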
While this might seem like a good direction, we will argue that, as it introduces a notion of hierarchy, this approach constitutes a weakness. The problem is
that nodes with a higher reputation have more power than other nodes. Other
nodes will tend to trust them more, and they are able to influence other nodes’
reputations more effectively. An attacker simply needs a little patience, waiting
for one of his nodes to gain sufficient trust before launching his attack. If
the attacker deploys many malicious nodes, as is often the case, they can give
each other a high reputation, making them all appear trustworthy. Finally, highly
reputed nodes constitute strategic targets, as they will be able to spread the attack
more efficiently.

First Conclusions

4.1 Only Pure P2P!

We have now been introduced to P2P networks and have observed most possible
attacks. So what are the first conclusions we can make at this point?
First of all, when designing a P2P network, it is of utmost importance not to
use a mixed P2P model. As soon as we enter any kind of notion of hierarchy,
we automatically present a target. If a node is more important, more trusted
or better connected than other nodes, then an attacker can use this to his advantage.
This permits malicious users to attack the network strategically,
which is far more dangerous. If there is absolutely no hierarchical structure,
then the network presents no strategic targets because of its uniformity.
Paper [4], for example, studies the effects super-nodes have on worm propagation
in Gnutella. In Gnutella, normal nodes connect to supernodes, which are
in turn connected to each other, acting as a kind of “highway”. It is shown [19]
that they play a significant role in worm propagation in the network, even without
being specifically targeted at the beginning. What better target to launch
a Sybil attack than such supernodes? Of course, pure P2P is much harder to
implement and also slower than the hierarchical approach: the implementation
of node querying is easy if all nodes sign in on a central server.

3.4.1 Defenses

Against man-in-the-middle attacks, carefully chosen cryptographic protocols
may be a good way to stop such an attack. Pricing could also help
against the Sybil attack version. The problem with such solutions is that they
constitute a serious slow-down and harm the scalability of the network.
The main defense against Eclipse attacks is simply to use a pure P2P network
model. An even better solution would be to additionally use a randomization
algorithm to determine the nodes’ location (as for example in Freenet). If
the nodes in a pure P2P network are randomly distributed, then there are no
strategic positions and an attacker can’t control his nodes’ positions. It would
be nearly impossible to separate two subnetworks from one another in such
conditions.

3.4 Eclipse Attack

Before an attacker can launch an eclipse attack, he must gain control over a
certain number of nodes along strategic routing paths. Once he has achieved
this, he can then separate the network into different subnetworks. Thus, if a node
wants to communicate with a node from the other subnetwork, his message must
at a certain point be routed through one of the attacker’s nodes. The attacker
thus “eclipses” each subnetwork from the other. In a way, eclipse attacks are
high-scale man-in-the-middle attacks.
An Eclipse attack can be the continuation of a Sybil attack. In this case,
the attacker will try to place his nodes on the strategic routing paths. We
argued before that man-in-the-middle attacks don’t pose a great threat to P2P
networks. However, such a high scale attack involving strategic targeting is
very serious. The attacker can completely control a subnetwork from the other
subnetwork’s point of view.
If an attacker manages an Eclipse attack (it is not an easy attack), he can attack
the network in a much more efficient manner.
• He can attack the control plane by inefficiently rerouting each message.
• He can decide to drop all messages he receives, thus completely separating
both subnetworks.
• He can attack the data plane by injecting polluted files or requesting
polluted files on behalf of innocent nodes, hoping these files will be
cached or copied along the way.

3.3.1 Defenses

Unfortunately, without a central trusted authority, it is not possible to convincingly
stop Sybil attacks [10]. Carefully configured reputation-based
systems might be able to slow the attack down, but they will not do much more.
Indeed, once the attacker has legitimately validated a certain number of identities,
he can validate the rest.
A good defense is to render a Sybil attack unattractive by making it impossible
to place malicious identities in strategic positions. We have already seen that
structured P2P networks are more resilient to worm propagation. For the same
reasons it is a good defense mechanism here, as an attacker will not be able to
place his identities where he wishes. Randomly dispersed malicious identities
are far less dangerous than strategically placed ones, especially if the P2P network
is of considerable size.
Another proposition could be to include the node’s IP in its identifier. A malicious
node would thus not be able to spoof fake identities as he would be bound
to a limited number of IPs and could be noticed and denounced if he created
more identities. Yet this solution is far from simple as other attacks are rendered
possible, such as generating fake identities for other nodes and then accusing
them of being malicious. This is why we will not consider this defense as it adds
a layer of complexity to the existing protocol whilst generating other potential
weaknesses.
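
Although we set this defense aside, a minimal sketch of the idea of binding a node's identifier to its IP address could look like the following; the hash construction and field layout are illustrative assumptions, not a protocol taken from the text.

import hashlib

def node_identifier(public_key: bytes, ip_address: str) -> str:
    # The IP is hashed into the identifier, so a peer claiming an ID can be
    # checked against the address it actually connects from; cheap identity
    # creation is limited to the few addresses the attacker really controls.
    return hashlib.sha256(public_key + ip_address.encode()).hexdigest()

key = b"example-public-key"
print(node_identifier(key, "192.0.2.10"))   # same key, different address ...
print(node_identifier(key, "192.0.2.11"))   # ... yields a different, checkable ID
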
Several papers propose a central trusted authority as a solution, as well as a
complicated public-private key based protocol [11]. Each node should sign his
messages, and respond to a challenge by the authority every now and then. It
is clear that an attacker simulating many identities would need enormous resources
in order to be able to answer all the challenges periodically submitted
to each of his identities. While this certainly tries to solve the problem, it is unsatisfactory:
this solution breaks the P2P model by reintroducing a centralized
point of failure, which can easily be attacked.

3.3 Sybil Attack

Sybil attacks are part of the control plane category. The idea behind this attack
is that a single malicious entity can present multiple identities, and thus gain
control over part of the network [10].
Once this has been accomplished, the attacker can abuse the protocol in any
way possible. For instance he might gain responsibility for certain files and
choose to pollute them. If the attacker can position his identities in a strategic
way, the damage can be considerable. He might choose to continue with an eclipse
attack, or slow down the network by rerouting all queries in a wrong direction.

3.2.1 Defenses

Although file poisoning attacks sound pretty dangerous, we will argue they
do not pose a serious threat to P2P networks [6]. The main reason is that P2P
applications are often left running in the background. When a polluted file is downloaded
by a user, it stays available for a while before being inspected and cleansed. After
a period of time, all polluted files are eventually removed and the authentic
files become more available than the corrupted ones. File-poisoning
attacks are still successful today due to three factors:
• clients are unwilling to share (rational attack).
• corrupted files are not removed from users machines fast enough.
• users give up downloading if the download seemingly stalls.
These three factors each give an advantage, in different ways, to the most available file,
which probably is the polluted file at the beginning. Simulations show these
factors tend to greatly slow down the removal of polluted files on the network.

3.2 File Poisoning

File poisoning attacks operate on the data plane and have become extremely
commonplace in P2P networks. The goal of this attack is to replace a file in the
network by a false one. This polluted file is of course of no use.
It has been reported [7][8][9] that the music industry has massively released
false content on P2P networks. Moreover, companies such as Overpeer or Retsnap
publicly offer their pollution-based services to the entertainment industry
as a way of protecting copyrighted material.
In order to attack by file poisoning, malicious nodes will falsely claim to own a
file, and upon a request will answer with a corrupt file. For a certain amount
of money, Overpeer or Retsnap will release huge amounts of fake copies of a file
on their servers. Moreover, all messages passing through a malicious node can be
poisoned (similar to a man-in-the-middle attack). These factors may give the
poisoned file a high availability, making it more attractive to download than the true
file.

3.1 Rational Attacks

For P2P services to be effective, participating nodes must cooperate, but in most
scenarios a node represents a self-interested party and cooperation can neither
be expected nor enforced. A reasonable assumption is that a large fraction
of P2P nodes are rational and will attempt to maximize their consumption of
system resources while minimizing the use of their own.
For example, nodes might realize that by not sharing, they save precious upload
bandwidth. In the case of copyrighted material, file sharing can have worse
outcomes: as it is illegal and quite easy for authorities to find out who is sharing
specific files, it can lead to a very large fine. These are good enough reasons to
motivate nodes to become “self-interested”. If a large number of nodes are
self-interested and refuse to contribute, the system may destabilize. Successful
P2P systems must be designed to be robust against this class of failure.

3.0 Specific P2P Attacks and Defenses

We will consider two different planes of attack in this section: the data plane
and the control plane. Attacking the data plane means attacking the data used
by the P2P application itself, for example by poisoning it or rendering it in any
way unavailable. On the other hand, attacking the control plane means directly
attacking the functionality of the P2P application, trying to render it slower
or as inefficient as possible. This is generally done by using weaknesses in the
routing protocol. Depending on the attacker’s goal, he will choose to attack in
one plane or the other, or both.
These two planes are not completely independent. For instance, by attacking
the data plane and corrupting many files, an attacker causes users to download more
instances of a file, thus slowing down the traffic, which is typically the aim of a
control plane attack. Vice versa, eclipse attacks, which are in the control plane,
can render data inaccessible, which is the primary objective of a data plane
attack.
The possibilities for attacks on P2P networks are enormous. What follows is an
analysis of the most common attacks as well as some appropriate defense mechanisms.

2.4 The Human Factor

The human factor should always be a consideration when security is at issue.
We previously saw that the upswing P2P applications have experienced is also
due to their ease of installation and use, their low cost (most of the time free) and their
great rewards. Even novice users have little difficulty using such applications
to download files that other users have shared, intentionally or accidentally, on
the P2P network.
This is yet another security problem P2P applications are posing. Empowering
a user, especially a novice, to make choices regarding the accessibility of their
files is a significant risk. Because of its convenient and familiar look, applications
such as Kazaa can cause a user to unwittingly share the contents of his
documents or, even worse, his whole hard disk.
Unfortunately, novice users do not understand the implications of their inaction
with regard to security. Simply closing the application for instance isn’t enough
as most of them continue running in the background. Remarkably, millions
of P2P peers are left running unattended and vulnerable for large periods of time.

2.3.1 Defenses

Before considering any technical defense, there must be a sensitization of P2P
users. Leaving a personal computer unattended without a complete firewall and
anti-virus on a broadband internet connection is begging for trouble. Blaster,
for example, exploited a vulnerability 5 days after it was made public by Microsoft
with a “Security Update” that fixed it.
A solution would be for P2P software developers not to write any buggy software!
Perhaps that is a far-fetched goal, but it would be better to favor strongly
typed languages such as Java or C# instead of C or C++, where buffer overflows
are much easier to introduce.
Another interesting observation is that hybrid P2P systems have a vulnerability
pure P2P systems do not. By making some nodes more special than others
(for example better connectivity for Gnutella’s supernodes) the attacker has the
possibility to target these strategic nodes first in order to spread the worm more
efficiently later on. Pure P2P does not offer such targets as all nodes have the
same “importance”.
Finally, it is interesting to note that operating system developers are also offering
some solutions. OpenBSD’s 3.8 release now returns pseudo-random memory
addresses. This makes exploiting buffer overflows close to impossible, as an attacker cannot
know which data segment he should overwrite [15].

2.3 Worm Propagation

Worms already pose one of the biggest threats to the internet. Currently, worms
such as Code Red or Nimda are capable of infecting hundreds of thousands of
hosts within hours, and there is no doubt that better engineered worms would be able
to reach the same result in a matter of seconds. Worms propagating
through P2P applications would be disastrous: it is probably the most serious
threat.
There are several factors which make P2P networks attractive for worms [13]:
• P2P networks are composed of computers all running the same software.
An attacker can thus compromise the entire network by finding only one
exploitable security hole.
• P2P nodes tend to interconnect with many different nodes. Indeed, a
worm running on the P2P application would no longer lose precious time
scanning for other victims. It would simply have to fetch the list of the
victim’s neighboring nodes and spread on (see the sketch after this list).
• P2P applications are used to transfer large files. Some worms have to
limit their size in order to fit in one TCP packet. This problem would
not be encountered in P2P worms and they could thus implement more
complicated behaviors.
• The protocols are generally not viewed as mainstream and hence receive
less attention from intrusion detection systems.
• P2P programs often run on personal computers rather than servers. It is
thus more likely for an attacker to have access to sensitive files such as
credit card numbers, passwords or address books.
• P2P users often transfer illegal content (copyrighted music, pornography
...) and may be less inclined to report an unusual behavior of the system.
• The final and probably most attractive quality P2P networks possess is their
potentially immense size.
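
To illustrate why neighbouring-node lists remove the need for scanning, here is a small, purely illustrative simulation of a worm spreading over a P2P overlay; the toy topology and infection rule are made-up assumptions, not data from [13].

import random
from collections import deque

random.seed(1)
NODES = 1000
# Each node knows a handful of random neighbours, as in an unstructured overlay.
neighbors = {n: random.sample(range(NODES), 5) for n in range(NODES)}

infected = {0}          # patient zero
queue = deque([0])
while queue:
    current = queue.popleft()
    for peer in neighbors[current]:   # no scanning: just follow the neighbour list
        if peer not in infected:
            infected.add(peer)
            queue.append(peer)

print(f"{len(infected)} of {NODES} nodes reached without any address scanning")
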
Once worms finish propagating, their goal is usually to launch massive DDOS
attacks (W32/Generic.worm!P2P, W32.SillyP2P, ...) against political or commercial
targets (whitehouse.gov, microsoft.com, ...).

2.2.1 Defenses

Without a central trusted authority, which generally does not exist in P2P networks,
it is not possible to detect a man-in-the-middle attack. Nodes have
no information about their neighbors and have no way to identify
them later with certainty. Fortunately, as man-in-the-middle attacks are mostly
useless in P2P networks, this is not very alarming news.

2.2 Man-in-the-middle Attack

In a man-in-the-middle attack, the attacker inserts himself undetected between
two nodes. He can then choose to stay undetected and spy on the communication
or more actively manipulate the communication. He can achieve this
by inserting, dropping or retransmitting previous messages in the data stream.
Man-in-the-middle attacks can thus achieve a variety of goals, depending on the
protocol. In many cases the goal is identity spoofing or spreading false information.
Man-in-the-middle attacks are a nightmare in most protocols (especially when
there is a form of authentication). Fortunately, they are less interesting in P2P networks. All the nodes have the same “clearance” and the traffic’s content is
shared anyway which makes identity spoofing useless. If the P2P application
supports different clearances between nodes, then the implications of man-in-the-
middle attacks would depend on the protocol itself. Possible attacks could
be spreading polluted files on behalf of trusted entities or broadcasting on behalf
of a supernode.

Tuesday, July 28, 2009

2.1.1 Defenses


The first problem is detecting a DOS attack, as it can be mistaken for heavy
utilization of the machine. DDOS attacks using reflection are extremely hard to
block due to the enormous number and diversity of machines a malicious user
can involve in the attack (virtually any machine can be turned into a zombie).
In addition, as the attacker is often only indirectly involved (he attacks through
the zombies and the reflective network), it is often impossible to identify the
source of the attack. Because of these factors, there exists no general way of
blocking DOS attacks.

Figure 2.1: A DDOS attack. The attacker sends the order to the computers he
personally controls (masters), which then forward it to the zombies, which DOS
as many machines as possible and spoof their IP to be the victim’s, who will
receive all the replies.
A widely used technique to hinder DOS attacks is “pricing”. The host will
submit puzzles to his clients before continuing the requested computation, thus
ensuring that the clients go through an equally expensive computation. DOS
attacks are most efficient when the attacker consumes most of his victim’s resources
whilst investing very few resources himself. If each attempt to flood his
victim results in him having to solve a puzzle beforehand, it becomes more difficult
to launch a successful DOS attack. “Pricing” can be modified so that when
the host perceives itself to be under attack, it gives out more expensive puzzles,
and therefore reduces the effect of the attack. Although this method is effective
against a small number of simultaneous attackers, it more or less fails against
very distributed attacks. Other drawbacks are that some legitimate clients, such
as mobile devices, might find the puzzles too hard and/or would waste limited
battery power on them.
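
A minimal sketch of such a client puzzle, here a hash-based proof of work (one common way of implementing pricing; the exact scheme is an assumption, not taken from the text), is shown below.

import hashlib, os

# The host asks the client to find a nonce such that SHA-256(challenge || nonce)
# starts with `difficulty` zero bytes: cheap to verify, costly to solve.
def solve(challenge: bytes, difficulty: int = 2) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if digest[:difficulty] == b"\x00" * difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = 2) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return digest[:difficulty] == b"\x00" * difficulty

challenge = os.urandom(16)
nonce = solve(challenge)          # expensive for the client
print(verify(challenge, nonce))   # cheap for the host; raise difficulty under attack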

DOS Attacks

1.4 Thesis Organisation

This thesis is now organised into 4 main sections:
1 First, we will look at several vulnerabilities or attacks found in general
networks.

2 We will then look at more specific attacks specially designed for P2P
networks.
After these two analyses, we will try to draw some first conclusions. We will
then proceed to our case study: Freenet.
3 We will thoroughly describe the Freenet structure.
4 Finally, we will try to find potential weaknesses in Freenet and ways to
improve them.
After this we will draw our final conclusions and explore possible new directions.

1.3 Future and Vulnerability

Some futurists believe P2P networks will trigger a revolution in the near future.
The ease of use, the huge choice and finally the low price (often free) have been
the main reasons for the explosion of file-sharing applications over the past years.
Add to this the fact that internet connection speeds are steadily increasing, the
arrival of newer faster algorithms (Caltech’s FAST algorithm was clocked 6,000
times faster than the internet’s current protocol) as well as the incapacity to
control or monitor such networks. This P2P revolution simply means huge
quantities of data will be available almost instantly to anybody for free.
This, of course, is disturbing news for many industries (music, movie, game...)
as P2P networks provide an alternative way of acquiring many copyrighted
products. These industries have very actively been waging war against “digital
piracy” for a decade soon. The results of this war are controversial but as P2P
networks have never stopped growing during this period of time, it is acceptable
to think that they will steadily grow on and gain even more importance in the
future.

1.2 Historical

Although P2P networking has existed for quite some time, it has only been
popularized recently and will probably be subject to even bigger revolutions in
the near future.
Napster was the first P2P application which really took off. The way it worked
was quite simple: a server indexed all the files each user had. When a client
queried Napster for a file, the central server would answer with a list of all indexed
clients who already possessed the file.
Napster-like networks are known now as first generation networks. Such networks
didn’t have a complicated implementation and often relied on a central

server (hybrid P2P). The central server model makes sense for many reasons:
it is an efficient way to handle searches and allows the operator to retain control over the
network. However, it also means there is a single point of failure. When lawyers
decided Napster should be shut down, all they had to do was to disconnect the
server.
Gnutella was the second major P2P network. After Napster’s demise, the creators
of Gnutella wanted to create a decentralized network, one that could not
be shut down by simply turning off a server. At first the model did not scale
because of bottlenecks created whilst searching for files. FastTrack solved this
problem by rendering some nodes more capable than others. Such networks
are now known as second generation networks and are the most widely used
nowadays [1].
Third generation networks are the new emerging P2P networks. They are
a response to the legal attention P2P networks have been receiving for a few
years and have built-in anonymity features. They have not yet reached the mass
usage that the main second generation networks currently enjoy, but this could change
shortly. Freenet is a good example of a third generation P2P network, which is
why we will study it more deeply in this thesis.

1.1 Peer-to-Peer Network Definition

Throughout this thesis we will study peer-to-peer networks, henceforth we will
use the acronym P2P. A P2P network is a network that relies on the computing
power of its clients rather than on the network itself [1]. This means the clients
(peers) will do the necessary operations to keep the network going, rather than
a central server. Of course, there are different levels of peer-to-peer networking
(a small sketch contrasting their lookup models follows the list):
• Hybrid P2P: There is a central server which keeps information about
the network. The peers are responsible for storing the information. If they
want to contact another peer, they query the server for the address.
• Pure P2P: There is absolutely no central server or router. Each peer
acts as client and server at the same time. This is also sometimes referred
to as “serverless” P2P.
• Mixed P2P: Between “hybrid” and “pure” P2P networks. An example
of such a network is Gnutella which has no central server but clusters its
nodes around so-called “supernodes”.
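
To make the distinction concrete, here is a small, purely illustrative sketch contrasting a hybrid lookup (ask a central index) with a pure lookup (flood the query to neighbours); the peer names, shared files, hop limit and topology are made-up assumptions.

central_index = {"song.mp3": ["peer2", "peer5"]}          # hybrid P2P: one server-side index

def hybrid_lookup(filename):
    return central_index.get(filename, [])                # one query, single point of failure

neighbors = {"peer1": ["peer2", "peer3"], "peer2": ["peer4"],   # pure P2P: peers only
             "peer3": [], "peer4": [], "peer5": []}             # know their neighbours
files = {"peer2": {"song.mp3"}, "peer5": {"song.mp3"}}

def pure_lookup(start, filename, ttl=3):
    # Flood the query up to `ttl` hops: no server, but more traffic and no
    # guarantee of reaching every peer that holds the file.
    found, frontier, seen = [], [start], {start}
    for _ in range(ttl):
        nxt = []
        for peer in frontier:
            if filename in files.get(peer, set()):
                found.append(peer)
            for n in neighbors.get(peer, []):
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return found

print(hybrid_lookup("song.mp3"))          # ['peer2', 'peer5']
print(pure_lookup("peer1", "song.mp3"))   # ['peer2'] - peer5 is out of reach within ttl hops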

Attacks on Peer-to-Peer Networks

Abstract
In this thesis, we collect information about known attacks on P2P networks. We
try to classify them as well as study the different possible defense mechanisms.
As a case study, we take Freenet, a third generation P2P system, which we
deeply analyze, including simulating possible behaviors and reactions. Finally,
we draw several conclusions about what should be avoided when designing P2P
applications and give a new possible approach to making a P2P application as
resilient as possible to malicious users.

Blackboard Learning System

All users have to register and need a username and password to access the software. At UAE University, students and faculty have the same ACE domain username and password for both the Oasis and linked Banner system and Blackboard 6.1, which makes it convenient. Once the user logs into Blackboard, a personal My Institution page is displayed. This main page has several main areas including a series of Navigation buttons, Navigation tabs (where the user can navigate between different sections of the program), a Module area (which contains announcements, course links etc.), a Tools area (containing utilities such as My Grades, Send an E-mail, Calendar etc.) and a Search Box (which can be used for information retrieval on the web from the Blackboard site itself). Within the Module area, students can see the courses they are registered for on Blackboard. The courses can also be accessed by clicking on the My Courses tab within the My Institution page.

The Blackboard course environment consists of two views. The Student View is the only view available to students enrolled on a course. In this view, a number of navigation buttons can be accessed. The exact navigation buttons themselves can be customized by the instructor, but typically include buttons such as Announcements, Course Information, Course Documents, Faculty Information, Assignments, Websites for the Course, Tools, Communication and Assignments. In contrast, the Control Panel is only available to Instructors, and this is the place where the Instructor manages the entire course and essentially constructs and tailors the course in their own way. As a user of Blackboard for several semesters now, I would highly recommend Blackboard as a course management tool for Instructors. Most Instructors need only general familiarity with the standard Windows environment to quickly come to grips with the system. In our Department and College, we have run introductory workshops in-house for new faculty on Blackboard, and it is our experience that this initial kick-start training period can even be limited to approximately ninety minutes. Once faculty are au fait with the basic features of Blackboard, the numerous additional features of the system can be explored at a later stage, when the Instructor begins to post material on the system, do on-line quizzes etc. Of course, it is essential that the University has a central Blackboard Support and Help Centre on a permanent basis to help faculty using the system and to run more comprehensive training programmes of some of the more advanced features. Additional features can then be explored and newsletters outlining tips on using Blackboard for Active Learning purposes, problems encountered, new features etc. are also a useful way of showing faculty the true benefit of this powerful software.

The ease of use of Blackboard is exceptional. As a former user of WebCT, I must admit I found the Blackboard interface and navigation easier to learn at the beginning, especially in relation to the posting of course information, announcements etc. One of the nice new features of Version 6.1 is the WYSIWYG (What You See Is What You Get) and spell-check facilities of the text-box editor. This facility was not available in earlier editions, and having access to standard Word buttons such as Bold, Italics, Justification etc. is a welcome new feature. However, as a chemist, I found it still is not possible to create subscripts and superscripts smoothly using Blackboard, i.e. the panel of buttons is limited and it was not possible to create the customary superscript and subscript buttons, as can be done neatly in Word by dragging these down via Tools, Customize and Commands. Of course, one can easily work around this problem by creating the text in FrontPage and pasting the HTML code into the textbox, or simply including the HTML tags. Another possibility is to use the embedded WebEQ Equation Editor. However, all these methods are somewhat cumbersome, especially for chemists.

Adding course information is also similar to posting an announcement, and the information can be added as an item, folder, external link etc. Faculty information can also be posted readily. A nice feature of this utility is that separate folders can be created for faculty; for example if you have joint faculty co-teaching a course. Within each folder, separate profiles can be created, with useful information for the student such as an Instructor's office hours, their e-mail address, the location of their office on campus, their homepage URL etc. In addition, the photograph of the Instructor can be posted. However, it is advised that for optimum results, a picture of 150 x 150 pixels in size should be used. Course Documents also has similar features to Course Information, and PowerPoint slides, Word documents etc. can be posted here, which may correspond to different chapters of a textbook etc. Furthermore, as the course is only accessible to the students and the Instructor teaching the course, not everybody can see the material. PowerPoint slides can also be posted in such a way that the students can only see the slides, without being able to edit them, if an Instructor wishes. An e-mail can also be sent to all students and Instructors having access to the course, which is an excellent facility of the system. This makes for efficient and prompt direct contact with the students.

One other new feature introduced in the 6th release of the Blackboard Learning System is the Assignment Manager. This new tool actually combines the file exchange capabilities of the Digital Drop Box with the functionality of the Gradebook in Blackboard. The Digital Drop Box is still present in the system, and can be used to transfer files to users. This is an excellent feature, as instead of forwarding e-mail attachments, one can send a file to a student very quickly through the Digital Drop Box. I have used this facility several times in my own classes teaching General Chemistry and Engineering Applications, where the students use their own personal Laptops in class, in a wireless Network environment.2 However, one problem with this facility that I found is that you can only remove one file at a time. This can be tedious if you receive, say, twenty-five files from students as homework assignments. There is no select-all-and-delete facility. In contrast, the new Assignment Manager tool is an area where course assignments can be posted, related files can be uploaded and grades published. It is the latter point that really distinguishes this feature from the Digital Drop Box. The Digital Drop Box should be used if you wish to exchange files between students etc., but where you do not wish to give grades. The Assignment Manager, in comparison, should be employed where a final grade will be assigned to a student’s work.

One of the most useful facilities of Blackboard has to be its Assessment facilities. In Pool Manager, a bank of questions with no point values can be created by an Instructor. Pool Manager can then be used to generate questions for on-line quizzes, exercises and tests. This facility should be used before importing the question banks into the Test Manager. One key advantage of Pool Manager is that the pool of questions can easily be readily exported. This gives great flexibility in courses where multiple Instructors are involved, as each can create banks of questions and transfer them to each other. With this utility, vast libraries of question banks can be built up in a Department on an ongoing basis each semester. Blackboard itself has the provision for seven different types of questions: multiple-choice, true/false, fill in the blank, order, multiple answer, match two lists and essay. Although the latter can be used, in the opinion of this reviewer, this type of question is probably not best suited to Blackboard, as there is a limit on the twenty answer patterns that can be used, and spelling mistakes, additional spaces and punctuation can invalidate an answer. In addition, an essay type question needs the Instructor to grade it. Having created a pool of questions in Pool Manager, the questions can then be imported into Test Manager for use in a test. One slightly annoying feature in Version 6, is that when you import a bank of questions from Pool Manager, there is no select all facility, which surprisingly was present in an earlier version. Hence, one has to physically go through each question and tick its box to import the question. This can be very time-consuming especially if you create an MCQ test for students of approximately 100 questions. Another cautionary note which academic users should be aware of is in relation to undesired student’s behaviour during online assessments. In several classes I have had the problems that students get an error message during an on-line test stating that they have already chosen to go to the next question, and please wait etc. These messages according to my colleagues at the Blackboard Support Unit at the University, appear to be due to the undesired behaviour of double-clicking the submit or next button. As the Web is a single-click environment, where double-clicking is not necessary on standard web pages, this seems to be the root of this problem, which can throw some students out in on-line assessments. The problem became so widespread in some of my classes, that I now have to mention this to them on a continuous basis to get the message across in order to avoid such error messages. Hopefully the developers will try and see some way round this potential problem in a future release.

I also tried bringing chemical structures, which I created in ISIS Draw 2.53, into Blackboard in the Test Manager. This can easily be done using the Creation Settings button. I saved a structure of an organic ligand, which I created initially in ISIS Draw, and converted it to a gif file using Microsoft PhotoEditor. I was then able to import this directly into Test Manager.

However, the best feature of the Assessment Tools is that of the Gradebook. This can easily be customized and rearranged to include mid-Semester and final examinations, quizzes, progress examinations etc. Once an on-line quiz or progress examination is taken on Blackboard, the grades are automatically imported into the Gradebook, which then can be weighted accordingly and can even be downloaded into an Excel spreadsheet in CSV file format. This feature is excellent, and with the collective utilities of the Test Manager and Gradebook, it has saved me personally hours of monotonous grading for many of my courses, where I employ MCQ type questions. I would definitely recommend Blackboard to any faculty thinking along the lines of a Laptop project type initiative.2

Blackboard has several other neat advanced features such as a Discussion Board, a Collaboration Session facility, Survey Manager and an excellent Course Statistics package, where you can track your student’s usage of the course materials.

3. Basic Policies

3.1. The Association is a group of Sri Lankans (and their family members) for themselves and for the country. The
activities of the Association are mainly targeted for the benefit of the members and Sri Lanka. However, the
Executive Committee on behalf of the Association may consider providing services to non-members, though not
obligatory.
3.2. The Association closely collaborates with the Government of Sri Lanka, Sri Lanka Embassy in Japan, all Sri
Lankan organizations, Sri Lankan people living in Japan, Japanese institutions and Japanese people.
3.3. The activities of the Association shall be implemented with mutual respect and cooperation, punctuality, sense of
responsibility, friendship and solidarity among members.
3.4. All attempts shall be made to implement any activity in the most efficient, effective and economical manner with
the best possible, but affordable quality.
3.5. In the implementation of projects and programs, and in the procurement of goods and services, priority is given to Sri Lankan
suppliers. However, considering price, quality and experience, the Executive Committee can deviate from this
policy where necessary.
3.6. The Association will be completely neutral in political matters and there shall be no discrimination on account of
religion, race or sex. Political issues shall never be discussed at any of the meetings of the Association including
all Committee Meetings.
3.7. All projects and programs formulated by each Sub Committee should be submitted to the Executive Committee
along with the cost estimates for approval. Any expenditure or even commitment should not be made until the
approval of the Executive Committee. However, in extremely urgent matters the President has the authority to
grant personal approval, in consultation with the Secretary and the Treasurer for an activity incurring a cost not
more than Yen 50,000.

Objectives

The objects of the Association, listed under Article 2, are as follows:
(a) To strengthen the Sri Lankan community in Japan;
(b) To promote social and cultural relations between Sri Lanka and Japan; and
(c) To contribute to the socioeconomic development of Sri Lanka
To accomplish the above objects, the organization should have a sound financial background as well as a strong
organization. Therefore, the following object should also be added when developing a strategic plan.
(d) To strengthen the Association financially, structurally, and in membership.

Sri Lanka Association in Japan Five-Year Strategic Plan (2006-2010)

1. Background
The Sri Lanka Association in Japan has a history of nearly three decades. The time has now come to restructure and
reorganize it with a new outlook and a greater vigor for the following reasons.
1.1. The Association has been in a dormant status for a few years, although it had earned a good reputation in the
past.
1.2. His Excellency the Ambassador is very keen to reactivate and strengthen the Association so as to see it
rendering fruitful services to Sri Lankans living in Japan and for Sri Lanka.
1.3. The Sri Lankan community in Japan and its expectations and capacities have diversified and expanded
significantly since the establishment of the Association.
1.4. Along with the globalization process and rapid development of information and communication technology,
services could be now rendered more efficiently and effectively than in the past.
Against this background, the constitution was revised at the Special General Meeting held on 04 June 2006. There is an
urgent need for a proper strategic plan for implementation in order to accomplish the objects listed in the constitution.

Monday, July 27, 2009

1.1 Stars – Introduction


Stars are like our Sun, but there are many variations of them. One thing is true: they all begin their lives with the spark of nuclear fusion at their cores.

Almost every dot in the night sky that we see is a star. All of those stars exist within our Milky Way Galaxy. Very rarely will a lone star actually exist in the space between galaxies; the norm is for stars to exist only within galaxies.

There are two main groups of stars:

* Population II Stars - old, metal poor stars

* Population I stars - new, metal rich stars

In addition, there are two main endings of a star's life:

* Normal stars - like our Sun - end their life as a Planetary Nebula and White Dwarf

* Large stars end their life in a supernova and end up as a Neutron Star or Black Hole

Lifetime of a normal star:

* Dust cloud forms a Main Sequence star that burns for about 10 billion years

* Star ends Main Sequence life and swells to a Red Giant (about the size of Earth's orbit) and burns for 100 million years

* Star sheds its layers as a Planetary Nebula lasting 100,000 years

* Only the core of the star remains as a White Dwarf

1.0 Galaxy



The image above - a screen grab from The Sky version 6 - demonstrates what this band of light looks like. In ancient times, it was called a river of milk, spilled by the gods. The feature thus came to be called the Milky Way - and the name stuck.

The Milky Way is actually a galaxy - a system of billions of stars gathered by mutual gravitation. Our knowledge of our galaxy (and many others) is still very new but much progress has been made. By using radio observations, we were able to determine the structure of our galaxy.

Sunday, July 26, 2009

Different Programming Languages

PL/1 (Programming Language 1)
It is a business and scientific language, suitable for batch processing and terminal usage. Designed to include the best features of FORTRAN and COBOL.

R.P.G. (Report Program Generator)
R.P.G. is more of a system for preparing reports than a true language. It is widely used on minicomputers and mainframes to prepare business reports, accounts receivable, inventory listings, statements, etc. R.P.G. is one of the easiest languages to learn.

BASIC (Beginners' All-purpose Symbolic Instruction Code)
It is designed for easy data input and output, and offers editing features.

APL
APL is one of the most powerful interactive languages yet developed.

PASCAL
It embodies the principles of structured programming long advocated by computer programming teachers and is a very powerful language.

C Language
The C programming language is one of the “smallest” programming languages. C is one of the most flexible and versatile of all programming languages. It is used to develop operating systems, business applications,
text processing, database applications and spreadsheets.

C++ Language
C++ was developed from the C programming language and, with few exceptions, retains C as a subset. It is one of the more recently developed languages supporting the new style of object-oriented program design.

LISP (LISt Processing)
The language has become widely used for computer science research, most prominently in the area of artificial intelligence.

PROLOG (PROgramming in LOGic)
PROLOG is another language that has been used for artificial intelligence applications.

Smalltalk
Smalltalk was developed at Xerox’s Palo Alto Research Center and implemented with the help of Alan Kay. It is an object-oriented language used for Xerox’s original graphical windowing systems.

HTML (HyperText Markup Language)
HTML is a text-based markup language that provides the underpinning for one of the most exciting information search and navigation environments, the World Wide Web.
It is a way of representing text and linking that text to other kinds of resources, including sound files, graphics files and multimedia files.
HTML can be considered as a text file that contains 2 kinds of text:
i. The content - text or information for display or playback on the client's screen or speakers.
ii. The markup - text or information to control the display or to point to other information or items in need of display.

JAVA
Most Java code is implemented in browser-based applets; the user must be running a browser that supports Java, called a “Java-enabled browser”, such as Netscape Navigator.

Visual Basic
With Visual Basic's Graphical User Interface (GUI) you can design screen presentations and menus using built-in controls and tables. The GUI facilitates importing screen presentations, graphics and data from other applications.

Disadvantages

less efficiency

4th Generation Languages
#. Introduce packages (Eg: Word, Excel, PowerPoint) that can be used by non-computer professionals.
#. These languages can be sorted into four categories:
i. Financial Planning / Modeling Languages
ii. Query Languages
iii. Report Generators
iv. Application Generators

5th Generation Languages
#. Introduce languages for Artificial Intelligence (AI) programming.
Eg: LISP, PROLOG

Special Purpose Languages
These are “tailor-made” for a particular type of problem.
Eg: Simulation, Controls and experiments

Command Languages
These are languages used to control the operations of the computer (languages to instruct the operating system).
Eg: Ms Access

Different types of High Level Languages

i. Procedural languages
Eg: - Pascal, COBOL, C
ii. Object oriented languages
Eg: - Smalltalk, Java, C++
iii. Visual languages
Eg: - Visual Basic, Visual C++, Visual Foxpro, Visual Pascal
Advantages
User friendly
Fewer instructions to be written
Machine independent
No H/W knowledge required

The Generations of Programming Languages

Programming languages have gone through several stages, as below.

a. 1st Generation Languages – Machine Languages

b. 2nd Generation Languages – Low-Level or Symbolic Languages

c. 3rd Generation Languages – High-Level or Procedural Languages

d. 4th Generation Languages

e. 5th Generation Languages

1st Generation Languages
#. Programs were written in machine code.

#. Needed to remember machine codes and had to write many machine instructions

#. No need for translators. Therefore execution speed was high.

#. Programs were machine dependent.

#. Modifications of programs were difficult.

2nd Generation Languages (Low-Level Languages)
#. Programs were written in assembly language. Had to write too many instructions.
#. Translators were needed. (Assembler)
#. Programs were machine dependent.
#. Modifications of programs were difficult.

3rd Generation Languages (High-Level languages)
#. Introduced programming languages much more similar to English.
#. Fewer instructions had to be written.
#. Programs were machine independent.
#. Modifications of programs were easy.
#. Can be classified into three categories, as below:
i. Commercial Languages – Eg: - R.P.G., COBOL
ii. Scientific Languages – Eg: - FORTRAN, ALGOL, BASIC
iii. Multipurpose Languages – Eg: - PL/1
