
Tuesday, November 25, 2008

Statistical Assumptions

• Normal distribution of data (which can be tested by using a normality test, such as the Shapiro-Wilk and Kolmogorov-Smirnov tests).

• Equality of variances (which can be tested by using the F test, the more robust Levene's test, Bartlett's test, or the Brown-Forsythe test).

• Samples may be independent or dependent, depending on the hypothesis and the type of samples:

o Independent samples are usually two randomly selected groups

o Dependent samples are either two groups matched on some variable (for example, age) or are the same people being tested twice (called repeated measures)

Since all calculations are done subject to the null hypothesis, it may be very difficult to come up with a reasonable null hypothesis that accounts for equal means in the presence of unequal variances. In the usual case, the null hypothesis is that the different treatments have no effect — this makes unequal variances untenable. In this case, one should forgo the ease of using this variant afforded by the statistical packages. See also Behrens–Fisher problem.
One scenario in which it would be plausible to have equal means but unequal variances is when the 'samples' represent repeated measurements of a single quantity, taken using two different methods. If systematic error is negligible (e.g. due to appropriate calibration) the effective population means for the two measurement methods are equal, but they may still have different levels of precision and hence different variances.
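As a concrete illustration, the normality and equal-variance checks listed above, and the fall-back to Welch's variant when the variances look unequal, can be sketched in a few lines of Python with scipy.stats. This is only a minimal sketch: the two groups below are made-up numbers and the 0.05 cutoff is just the conventional choice.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements from two independent groups (made-up data).
group_a = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1])
group_b = np.array([5.6, 5.4, 5.9, 5.7, 5.5, 6.0, 5.8])

# Normality check for each group (Shapiro-Wilk).
for name, g in (("A", group_a), ("B", group_b)):
    w, p = stats.shapiro(g)
    print(f"Shapiro-Wilk group {name}: W={w:.3f}, p={p:.3f}")

# Equality-of-variances check (Levene's test, robust to non-normality).
lev_stat, lev_p = stats.levene(group_a, group_b)
print(f"Levene: stat={lev_stat:.3f}, p={lev_p:.3f}")

# If the variances look unequal, fall back to Welch's t-test (equal_var=False);
# otherwise the classical Student's t-test is used.
equal_var = lev_p > 0.05
t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
print(f"t={t:.3f}, p={p:.3f} (equal_var={equal_var})")
```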

Determining type

For novices, the most difficult issue is often whether the samples are independent or dependent. Independent samples typically consist of two groups with no relationship. Dependent samples typically consist of a matched sample (or a "paired" sample) or one group that has been tested twice (repeated measures).
Dependent t-tests are also used for matched-paired samples, where two groups are matched on a particular variable. For example, if we examined the heights of men and women in a relationship, the two groups are matched on relationship status. This would call for a dependent t-test because it is a paired sample (one man paired with one woman). Alternatively, we might recruit 100 men and 100 women, with no relationship between any particular man and any particular woman; in this case we would use an independent samples test.
Another example of a matched sample would be to take two groups of students, match each student in one group with a student in the other group based on an achievement test result, and then examine how much each student reads. An example pair might be two students that score 90 and 91, or two students that scored 45 and 40 on the same test. The hypothesis of interest would be whether students who did well on the test also read more. Alternatively, we might recruit students with low scores and students with high scores into two groups and assess their reading amounts independently.
An example of a repeated measures t-test would be if one group were pre- and post-tested. (This example occurs in education quite frequently.) If a teacher wanted to examine the effect of a new set of textbooks on student achievement, (s)he could test the class at the beginning of the year (pretest) and at the end of the year (posttest). A dependent t-test would be used, treating the pretest and posttest as matched variables (matched by student).
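To make the pretest/posttest example concrete, here is a minimal sketch of a dependent (paired) t-test with scipy; the scores are invented, and each position in the two arrays refers to the same student.

```python
import numpy as np
from scipy import stats

# Invented scores for the same ten students before and after the new textbooks.
pretest  = np.array([62, 70, 55, 80, 66, 74, 59, 68, 72, 61])
posttest = np.array([68, 75, 58, 83, 70, 79, 60, 71, 78, 64])

# Paired (dependent) t-test: the pairing is by student, so the order matters.
t, p = stats.ttest_rel(posttest, pretest)
print(f"paired t = {t:.3f}, p = {p:.4f}")

# Contrast: stats.ttest_ind would (wrongly) treat the two columns as unrelated groups.
```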

Statistical Uses

Among the most frequently used t tests are:

* A test of whether the mean of a normally distributed population has a value specified in a null hypothesis.
* A test of the null hypothesis that the means of two normally distributed populations are equal. Given two data sets, each characterized by its mean, standard deviation and number of data points, we can use some kind of t test to determine whether the means are distinct, provided that the underlying distributions can be assumed to be normal. All such tests are usually called Student's t tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t test. There are different versions of the t test depending on whether the two samples are
o unpaired, independent of each other (e.g., individuals randomly assigned into two groups, measured after an intervention and compared with the other group[4]), or
o paired, so that each member of one sample has a unique relationship with a particular member of the other sample (e.g., the same people measured before and after an intervention[4]).

If the calculated p-value is below the threshold chosen for statistical significance (usually the 0.10, 0.05, or 0.01 level), then the null hypothesis, which usually states that the two groups do not differ, is rejected in favor of an alternative hypothesis, which typically states that the groups do differ.

* A test of whether the slope of a regression line differs significantly from 0.

Once a t value is determined, a p-value can be found using a table of values from Student's t-distribution.
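The table lookup can also be done numerically, as in the following sketch; the t value and degrees of freedom here are arbitrary example numbers.

```python
from scipy import stats

t_value = 2.31   # example t statistic
df = 18          # example degrees of freedom

# Two-sided p-value from Student's t distribution (replaces the printed table).
p_two_sided = 2 * stats.t.sf(abs(t_value), df)
print(f"p = {p_two_sided:.4f}")
```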

Monday, November 24, 2008

Correlation



[Figure: several sets of (x, y) points, with the correlation coefficient of x and y for each set. The correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle row), nor many aspects of nonlinear relationships (bottom row). N.B.: the center figure has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.]

In probability theory and statistics, correlation (often measured as a correlation coefficient) indicates the strength and direction of a linear relationship between two random variables. That is in contrast with the usage of the term in colloquial speech, denoting any relationship, not necessarily linear. In general statistical usage, correlation or co-relation refers to the departure of two random variables from independence. In this broad sense there are several coefficients, measuring the degree of correlation, adapted to the nature of the data.
A number of different coefficients are used for different situations. The best known is the Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. Despite its name, it was first introduced by Francis Galton.[1]
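That definition translates directly into a few lines of Python; the two arrays below are arbitrary sample data, and the hand-computed value is compared against numpy's built-in routine.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# Pearson r = covariance(x, y) / (std(x) * std(y)), using population moments.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
r = cov_xy / (x.std() * y.std())

print(r, np.corrcoef(x, y)[0, 1])  # the two values should agree
```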

Statistical assumptions

When the number of measurements, N, is larger than the number of unknown parameters, k, and the measurement errors εi (see below) are normally distributed, then the excess of information contained in (N − k) measurements is used to make the following statistical predictions about the unknown parameters:
• confidence intervals of unknown parameters.

Independent measurements

Quantitatively, this is explained by the following example: Consider a regression model with, say, three unknown parameters β0, β1 and β2. An experimenter performed 10 repeated measurements at exactly the same value of independent variables X. In this case regression analysis fails to give a unique value for the three unknown parameters: the experimenter did not provide enough information. The best one can do is to calculate the average value of the dependent variable Y and its standard deviation.
If the experimenter had performed five measurements at X1, four at X2 and one at X3, where X1, X2 and X3 are different values of the independent variable X then regression analysis would provide a unique solution to unknown parameters β.
In the case of general linear regression (see below) the above statement is equivalent to the requirement that the matrix XᵀX is regular (that is, it has an inverse matrix).
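A small numpy sketch of the two situations just described, mirroring the example of five measurements at X1, four at X2 and one at X3 versus ten repeated measurements at a single X; the model and numbers are made up for illustration.

```python
import numpy as np

def design(x_values):
    """Design matrix for the model y = b0 + b1*x + b2*x**2 (three unknown parameters)."""
    x = np.asarray(x_values, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])

# Ten repeated measurements at exactly the same X: X^T X is singular,
# so the three parameters cannot be identified.
X_bad = design([2.0] * 10)
print(np.linalg.matrix_rank(X_bad.T @ X_bad))    # 1, not 3

# Measurements spread over three distinct X values: X^T X is regular
# (invertible) and a unique least-squares solution exists.
X_good = design([1.0] * 5 + [2.0] * 4 + [3.0])
print(np.linalg.matrix_rank(X_good.T @ X_good))  # 3
```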

Regression equation

It is convenient to assume an environment in which an experiment is performed: the dependent variable is then the outcome of a measurement.

The regression equation deals with the following variables:
• The unknown parameters denoted as β. This may be a scalar or a vector of length k.
• The independent variables, X.
• The dependent variable, Y.

The regression equation is a function of the variables X and β.

The user of regression analysis must make an intelligent guess about this function. Sometimes the form of the function is known; sometimes a trial-and-error process must be applied.
Assume now that the vector of unknown parameters, β, is of length k. In order to perform a regression analysis the user must provide information about the dependent variable Y:

• If the user performs the measurement N times, where N < k, regression analysis cannot be performed: not enough information has been provided to do so.

• If the user performs N independent measurements, where N = k, then the problem reduces to solving a set of N equations with N unknowns β.

• If, on the other hand, the user provides results of N independent measurements, where N > k, regression analysis can be performed. Such a system is also called an overdetermined system.

In the last case the regression analysis provides the tools for:

1. finding a solution for the unknown parameters β that will, for example, minimize the distance between the measured and predicted values of the dependent variable Y (also known as the method of least squares).

2. using the surplus of information, under certain statistical assumptions, to provide statistical information about the unknown parameters β and the predicted values of the dependent variable Y.
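Here is a minimal sketch of the overdetermined case (N > k) solved by least squares with numpy; the data are simulated from an assumed "true" parameter vector purely to show the estimate recovering it approximately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: N = 30 measurements, k = 3 unknown parameters.
x = rng.uniform(0.0, 10.0, size=30)
X = np.column_stack([np.ones_like(x), x, x**2])      # design matrix
beta_true = np.array([1.5, -0.7, 0.2])               # assumed "true" parameters
y = X @ beta_true + rng.normal(scale=0.5, size=30)   # noisy dependent variable

# Least-squares estimate of beta: minimizes ||y - X beta||^2.
beta_hat, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("estimated beta:", beta_hat)
```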

Regression diagnostics

Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters.

Interpretations of these diagnostic tests rest heavily on the model assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
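As a sketch of these diagnostics, the quantities mentioned above (R-squared, the residuals, and t statistics for individual parameters) can be computed by hand with the standard OLS formulas; the data below are simulated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=40)
X = np.column_stack([np.ones_like(x), x])            # simple straight-line model
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=40)   # simulated observations

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R-squared: share of the variation in y explained by the model.
r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

# t statistics and p-values for the individual parameters (standard OLS formulas).
n, k = X.shape
sigma2 = resid @ resid / (n - k)                 # residual variance estimate
cov_beta = sigma2 * np.linalg.inv(X.T @ X)       # covariance of the estimates
t_stats = beta / np.sqrt(np.diag(cov_beta))
p_vals = 2 * stats.t.sf(np.abs(t_stats), df=n - k)

print(f"R^2 = {r2:.3f}", "t =", t_stats.round(2), "p =", p_vals.round(4))
```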

Regression analysis


In statistics, regression analysis is a collective name for techniques for the modeling and analysis of numerical data consisting of values of a dependent variable (also called response variable or measurement) and of one or more independent variables (also known as explanatory variables or predictors). The dependent variable in the regression equation is modeled as a function of the independent variables, corresponding parameters ("constants"), and an error term. The error term is treated as a random variable. It represents unexplained variation in the dependent variable. The parameters are estimated so as to give a "best fit" of the data. Most commonly the best fit is evaluated by using the least squares method, but other criteria have also been used.

Regression can be used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships. These uses of regression rely heavily on the underlying assumptions being satisfied. Regression analysis has been criticized as being misused for these purposes in many cases where the appropriate assumptions cannot be verified to hold.[1][2] One factor contributing to the misuse of regression is that it can take considerably more skill to critique a model than to fit a model.

Friday, August 29, 2008

The Content Side

Blog info

Logic With Markov
" Technology markov for images...."
Other Markov Data
" At the intersection of statistical physics and probability theory .... "
All About Hardware
"..... the technologi hardware is always ....."
MathType
"MathTypeTM is an intelligent mathematical equation editor designed for personal computers running Microsoft Windows ...." -
E-Commerce
Have you ever thought about working an eCommerce business?
Dreamweaver
"Macromedia Dreamweaver MX 2004 is a professional HTML editor for designing,..... "
Notebook Buying Tips?

Monday, August 25, 2008

X10 Sample Scenarios

X10 Sample Scenarios

By Whome


Here are a few situations that might suit the use of X10 devices, based on my own experience.

Outbuilding lights
If you have a detached garage or shed, or other outbuildings, it can be very convenient to be able to switch the lights on and off from inside the main building as well as from the outbuilding. Normally this would require two-way switching, with a three-core cable run from the house to the outbuilding. This would be in addition to any other cabling you may have installed already. Suppose you have a detached garage about 60 feet from your house, as I do. You have already run a heavy armoured cable from the house to the garage, and don't want to run any more. With an X10 lamp module in the garage you can control the garage lighting from anywhere; no extra cabling is required. In addition, you can install multiple switches in the garage if it has separate vehicle and people doors.

Coupling room lights
Suppose you merged two rooms in a house to make a single large room. For example, in my area, houses have separate living rooms and dining rooms; commonly people knock down the dividing wall to make one large room. But you now have a long room with two doors, and separate light switches. What you really want is for both switches to operate both lights. No problem: just replace the light switches with X10 lights witch modules, set to the same unit code. Either light switch will then control both lights.

Convenience switching
When I moved into my house the lights for the attic were controlled by a small switch in the attic itself. To get any light in the attic I had to get into it and then wander around in total darkness looking for the switch. Eventually I rewired it so that the switch was in the room below. However, it would have been much easier to install an X10 pendant lamp module on the attic light, and then it could have been controlled from anywhere. Similar logic applies to other inaccessible areas.

Branch Office

Branch Office in a Box

Deliver local IT services locally. Consolidate multiple server functions onto a single device to simplify management and streamline branch office IT.

Local Services Delivered Locally

Users enjoy great response to print, authorization or network service requests at the edge with local branch office IT services. Eliminate remote hardware and simplify IT management and administration by consolidating multiple server functions, such as file services, print services and domain control or authentication, onto a single device.

Print Services

Print services provided by Packeteer appliances handle printing locally without requiring print requests to traverse the WAN. Reduce branch office IT administration and associated costs by eliminating dedicated branch office print servers. Print services include:

  • Native Windows "Point and Print" architecture
  • Active Directory-enabled printer browsing
  • Simplified print cluster installation/administration
  • Web-based administration

Domain Controller Services

Handle user logon processes, authentication and directory searches for Windows domains locally at the branch office. Packeteer Domain Controller services use the native functionality available on Windows Server 2003, with:

  • Local processing of authentication and login requests
  • Support for user login and authentication services during WAN outages

Network Services

Packeteer network services take the headache out of administering branch office network access while improving end-user performance. A single Packeteer appliance can host DNS and DHCP server functions, so IT administrators can consolidate critical networking services into a single footprint, with:

  • DNS caching for branch office user name resolution
  • Assignment of IP addresses for branch office users via DHCP

Web Caching Services

Enjoy faster delivery of Web pages via HTTP object caching—providing rapid and seamless branch office Web access and lower bandwidth consumption. Built on Microsoft's ISA (Internet, Security and Acceleration) server technology, Packeteer Web caching services meet the performance, management, and scalability needs of high-volume Internet traffic environments with centralized server management, including:

  • Support for secure Web sites accessed via SSL
  • Easily-configured port numbers and cache size
  • Scheduled cache pre-population
  • Optimization for "split-tunnel" branch office environments with direct Internet access

Management Services

Packeteer management services optimize software and application distribution at the branch office. Realize the benefits of WAFS for Microsoft SMS (Systems Management Server) packages, including faster package download and caching at the branch office. Also:

  • Centralized software distribution for remote office network
  • Faster download of software or application packages to the branch office
  • No repeat transmission of upgrade or application packages
  • Seamless integration with WAFS deployments

Increase WAN

Compression/Caching

Quickly increase WAN capacity, improving application performance and user response times with application-intelligent, file-aware compression and caching.

Increase WAN Capacity for Application Traffic

Packeteer's application-intelligent compression technologies deliver an easy way to quickly increase WAN capacity over the same physical links, improving application performance and user response times. Packeteer's unique PacketShaper architecture enables continuous improvement in compression gains—while ActiveTunnel simplifies compression setup and configuration between two PacketShapers.

Pipe

Compression with control means that the added capacity from virtual bandwidth goes to your applications with the highest priority.

Application-intelligent Symmetric Compression

Compression works between two PacketShapers—symmetrically—to engage specialized, low-latency compression algorithms. Using multiple traffic compression techniques—including multiple algorithms, fragment caching, header compression and packet bundling—our Layer 7 Plus application intelligence helps identify different application types and apply the optimal compression techniques to each—or not at all.

Application-specific Selective Compression

To optimize compression, set up separate dictionary caches for different applications, avoiding the dilution effect where large amounts of other traffic weaken compression effectiveness. PacketShaper can also choose a different compression approach for packet payloads than for packet headers, and can apply "two-pass" compression to less latency-sensitive applications.

Sometimes choosing not to compress an application is as important as compressing it. Applications such as SSL, JPEG files, VoIP data payloads and already-compressed Citrix® traffic don't generally benefit from compression, and injecting even minimal amounts of latency for them is not worthwhile. When it makes sense, PacketShaper's application intelligence opts not to compress, saving resources and getting better overall compression results.

Compression results vary, depending on application mix. Beware of promised 5:1 or even 10:1 compression ratios—which are based on best-case tests. A more realistic range is 2:1 to 3:1; however, 4:1 to 5:1 can be achieved if you have more compressible traffic types.

Plug-In Architecture

Packeteer's plug-in architecture enables Packeteer to add new, application-specific compression algorithms over time. We continue to update our compression technologies—releasing four new algorithms since 2003 and improving effectiveness by about 40 percent. As plug-ins, new classifications can be easily downloaded as they become available—without waiting for a major software release.

Minimize Latency: MTU Management, Packing and Rate Control

To minimize latency and further accelerate traffic, PacketShaper's MTU (maximum transmission unit) management automatically adjusts MTU size to eliminate excess delays from link serialization delay or increase MTU to eliminate overhead from headers and acknowledgements.

Selectively bundling or concatenating packets, PacketShaper evaluates packet and MTU size as well as network timing to determine if, and when, combining multiple compressed packets into a single larger packet makes sense. Packing reduces overhead and improves compression gains.

ActiveTunnel: Automatic Setup and Overload Protection

Packeteer's ActiveTunnel feature automatically detects Xpress-enabled PacketShapers on the network and builds acceleration tunnels between them. Beyond enabling Xpress—a simple matter of toggling "on"—no configuration is required to set up or maintain the tunnels.

Since high traffic volume overloads compression efforts, actually increasing latency, PacketShaper automatically detects overloaded situations and backs off or steps up, as appropriate. Traffic shaping still takes precedence over compression when the network gets swamped.

Thursday, August 14, 2008

AutoMate


AutoMate enables technology vendors to provide seamless and transparent application automation.

AutoMate has long been the software-of-choice for many technology companies looking to enhance the automation capabilities of their solutions. Whether the solution is based on packaged or web-based software, or on a combination of software and hardware, AutoMate provides all the power needed for complete solution and application automation.

With AutoMate, automation can be accomplished extremely quickly without the need for programming expertise. This makes a solution involving AutoMate very affordable. But AutoMate also provides the power and reliability that top-tier technology companies demand. Its client/server architecture allows for multiple-machine, cross-platform processing on anywhere from one to thousands of computers. Its centralized management tools provide a unified view of all automation across the computer network, and ensure reliable execution of all automation processes.

Technology



AutoMate automates repetitive IT tasks without requiring code or syntax.

IT managers are under intense pressures to manage ever-expanding computer networks while providing users with better service. They must do this with fixed budgets and lofty expectations that include the elimination of downtime and real-time access to data. Faced with these pressures, demands, and limitations, IT managers need software that can streamline and automate time-consuming and costly IT processes.

With AutoMate, automation can be accomplished extremely quickly without the need for programming expertise. This makes a solution involving AutoMate very affordable. But AutoMate also provides the power and reliability that business users demand. Its client/server architecture allows for multiple-machine, cross-platform processing on anywhere from one to thousands of computers. Its centralized management tools provide a unified view of all automation across the computer network, and ensure reliable execution of all automation processes. IT managers from all industries rely on AutoMate's breadth of capabilities and ease-of-use to streamline their networks.

Monday, August 4, 2008

Chapter II Theoretical Foundations

CHAPTER II
LITERATURE REVIEW


2.1 Basic Communication Theory
2.1.1 Basic Definition of Communication
2.1.2 Review of Communication Theory
2.2 The Concept of Group Communication
2.2.1 Characteristics of Group Communication
2.2.2 Definition of a Group
2.2.3 Emergence of Groups
2.2.4 Classification of Groups
2.2.5 Group Goals
2.2.6 Group Characteristics
2.2.7 Group Discussion
2.2.8 Group Composition
2.3 Group Cohesiveness
2.3.1 Definition of Cohesiveness
2.3.2 Aspects of Cohesiveness
2.3.3 Definition of Group Cohesiveness
2.3.4 Strength of Group Cohesiveness
2.4 Theory (Discussion)
2.4.1 Basis (Discussion)
2.4.2 Definition (Discussion)
2.4.3 Components (Discussion)
2.4.4 Formation and Change (Discussion)

Ordinal Data Validity Theory

THEORY OF VALIDITY FOR ORDINAL DATA

This is information about the theory of validity for ordinal data that you can use as an additional reference. It was originally written in Indonesian ('bahasa') for my clients in Indonesia; an English rendering follows:

Validity indicates the extent to which a measure actually measures what it is intended to measure. It can therefore be said that the higher the validity of a test instrument, the more precisely it hits its target, that is, the more it shows what it is supposed to measure. A test can be said to have high validity if it performs its measurement function, or gives measurement results consistent with the meaning and purpose for which the test was designed. If a researcher uses a questionnaire to collect research data, then the items composed in that questionnaire constitute the test instrument, which must measure what the research aims to measure.
One way to assess the validity of a test instrument is to look at item discriminability (daya pembeda item). Item discriminability is the most appropriate method to use for any type of test. In this study, item discriminability is assessed by means of the item-total correlation.
The item-total correlation is the consistency between an item's score and the overall score, as reflected in the magnitude of the correlation coefficient between each item and the total score. In this study the Spearman rank correlation coefficient is used, with the following calculation steps:

Spearman Rank Correlation Coefficient
If the items are measured on an ordinal scale (e.g., an attitude scale), the Spearman rank correlation for item i is computed over the n respondents as:

r_s = 1 - \frac{6 \sum_{j=1}^{n} d_j^2}{n(n^2 - 1)}, \qquad d_j = R(X_j) - R(Y_j)

The formula above is used when there are no tied values, or only a few. When there are many ties, the following formula is used instead:

r_s = \frac{\sum_j \bigl(R(X_j) - \overline{R(X)}\bigr)\bigl(R(Y_j) - \overline{R(Y)}\bigr)}{\sqrt{\sum_j \bigl(R(X_j) - \overline{R(X)}\bigr)^2 \; \sum_j \bigl(R(Y_j) - \overline{R(Y)}\bigr)^2}}

where: R(X) = rank of the X values (here, the item scores)
R(Y) = rank of the Y values (here, the total scores)

Once the correlation coefficients for all items have been computed, one must determine the smallest value that can still be considered "high" enough to indicate consistency between the item score and the total score. There is no strict cutoff here. The main principle of item selection based on the correlation coefficient is to look for coefficients that are as high as possible and to discard any item with a negative (-) correlation or a coefficient close to zero (0.00).
According to Friedenberg (1995), in the development and construction of psychological scales a minimum correlation coefficient of 0.30 is usually used. Thus, all items with a correlation below 0.30 can be set aside, and the items to be included in the test instrument are those with correlations above 0.30, with the understanding that the closer the correlation is to one (1.00), the better the item's consistency (validity).
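A minimal Python sketch of this item-selection rule, using scipy's Spearman rank correlation on a small invented matrix of ordinal responses; the 0.30 cutoff follows the Friedenberg (1995) guideline quoted above.

```python
import numpy as np
from scipy import stats

# Invented ordinal (Likert-style) responses: 8 respondents x 4 items.
items = np.array([
    [4, 3, 2, 5],
    [5, 4, 1, 4],
    [2, 2, 3, 2],
    [3, 3, 2, 3],
    [5, 5, 1, 5],
    [1, 2, 4, 1],
    [4, 4, 2, 4],
    [2, 1, 3, 2],
])
total = items.sum(axis=1)  # total score per respondent

# Item-total correlation with Spearman's rank coefficient; keep items >= 0.30.
for i in range(items.shape[1]):
    rho, _ = stats.spearmanr(items[:, i], total)
    verdict = "keep" if rho >= 0.30 else "discard"
    print(f"item {i + 1}: rho = {rho:+.2f} -> {verdict}")
```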


Ordinal Data Reliability Theory

RELIABILITY FOR ORDINAL DATA

Reliability is the degree to which the results of a measurement can be trusted. A measurement with high reliability is one that is able to give trustworthy (reliable) results. Reliability is one of the main characteristics of a good measurement instrument. Reliability is sometimes also referred to as trustworthiness, dependability, steadiness, consistency, stability, and so on, but the central idea in the concept of reliability is the extent to which the result of a measurement can be trusted, that is, the extent to which the measured scores are free from measurement error.
The level of reliability is shown empirically by a number called the reliability coefficient. Although in theory the reliability coefficient can range from 0.00 to 1.00, in practice a reliability coefficient of 1.00 is never attained, because humans, as the subjects of psychological measurement, are a potential source of error. In addition, although a correlation coefficient can be positive (+) or negative (-), in the context of reliability a coefficient below zero (0.00) is meaningless, because the interpretation of reliability always refers to a positive reliability coefficient.
The reliability coefficient used here is Cronbach's Alpha, computed with the following formula:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} S_i^2}{S_{\mathrm{total}}^2}\right)

where:
k is the number of item parts (splits)
S_i^2 is the variance of item i
S_{\mathrm{total}}^2 is the total variance over all items (the variance of the total score)
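As a sketch, the alpha formula above can be computed directly with numpy; the response matrix is invented, and the sample-variance (n-1) convention used here is one common choice.

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (parts)
    item_var = items.var(axis=0, ddof=1)        # S_i^2 for each item
    total_var = items.sum(axis=1).var(ddof=1)   # S_total^2 of the summed score
    return k / (k - 1) * (1.0 - item_var.sum() / total_var)

# Invented 6-respondent x 3-item example.
responses = [[3, 4, 3], [4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(responses), 3))
```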
Once the reliability coefficient has been computed, the strength of the relationship can be judged using Guilford's (1956) criteria:
1. less than 0.20 : very small relationship, negligible
2. 0.20 - < 0.40 : small relationship (not close)
3. 0.40 - < 0.70 : fairly close relationship
4. 0.70 - < 0.90 : close relationship (reliable)
5. 0.90 - < 1.00 : very close relationship (highly reliable)
6. 1.00 : perfect relationship


SOURCES:
Guilford, J.P., Psychometric Methods, Tata McGraw-Hill Publishing Company Limited, 1979.
Friedenberg, Lisa, Psychological Testing: Design, Analysis and Use, Allyn and Bacon, 1995.

Nominal Data Reliability Theory

RELIABILITY FOR NOMINAL DATA

Reliability is the degree to which the results of a measurement can be trusted. A measurement with high reliability is one that is able to give trustworthy (reliable) results. Reliability is one of the main characteristics of a good measurement instrument. Reliability is sometimes also referred to as trustworthiness, dependability, steadiness, consistency, stability, and so on, but the central idea in the concept of reliability is the extent to which the result of a measurement can be trusted, that is, the extent to which the measured scores are free from measurement error.
The level of reliability is shown empirically by a number called the reliability coefficient. Although in theory the reliability coefficient can range from 0.00 to 1.00, in practice a reliability coefficient of 1.00 is never attained, because humans, as the subjects of psychological measurement, are a potential source of error. In addition, although a correlation coefficient can be positive (+) or negative (-), in the context of reliability a coefficient below zero (0.00) is meaningless, because the interpretation of reliability always refers to a positive reliability coefficient.
The reliability coefficient used here is the Kuder-Richardson coefficient (KR-20), a reliability coefficient that captures the variation of items scored dichotomously as correct/incorrect, i.e., 0 or 1 (Guilford and Benjamin, 1978).

The Kuder-Richardson reliability coefficient (KR-20) can be computed with the following formula:

KR_{20} = \frac{n}{n-1}\left(1 - \frac{\sum_{i=1}^{n} p_i q_i}{S^2}\right)

where: n = the number of items
S^2 = the total variance (the variance of the total scores)
p = the proportion of people who answer item i correctly
1 - p = the proportion of people who answer item i incorrectly = q
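A small sketch of the KR-20 computation on an invented matrix of 0/1 item scores; the population-variance convention used for S^2 here is one common choice.

```python
import numpy as np

def kr20(scores):
    """KR-20 = n/(n-1) * (1 - sum(p_i * q_i) / S^2) for items scored 0 or 1."""
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[1]                        # number of items
    p = scores.mean(axis=0)                    # proportion answering item i correctly
    q = 1.0 - p                                # proportion answering incorrectly
    total_var = scores.sum(axis=1).var()       # S^2, variance of the total scores
    return n / (n - 1) * (1.0 - np.sum(p * q) / total_var)

# Invented 6-respondent x 5-item right/wrong data.
data = [[1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 1],
        [0, 1, 0, 0, 0]]
print(round(kr20(data), 3))
```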

Once the reliability coefficient has been computed, the strength of the relationship can be judged using Guilford's (1956) criteria:
1. less than 0.20 : very small relationship, negligible
2. 0.20 - < 0.40 : small relationship (not close)
3. 0.40 - < 0.70 : fairly close relationship
4. 0.70 - < 0.90 : close relationship (reliable)
5. 0.90 - < 1.00 : very close relationship (highly reliable)
6. 1.00 : perfect relationship




SOURCES:
Guilford, J.P., Psychometric Methods, Tata McGraw-Hill Publishing Company Limited, 1979.
Friedenberg, Lisa, Psychological Testing: Design, Analysis and Use, Allyn and Bacon, 1995.

Nominal Data Validity Theory

THEORY OF VALIDITY FOR NOMINAL DATA

A. VALIDITY
Validity indicates the extent to which a measure measures what it is intended to measure. It can therefore be said that the higher the validity of a test instrument, the more precisely it hits its target, that is, the more it shows what it is supposed to measure. A test can be said to have high validity if it performs its measurement function, or gives measurement results consistent with the meaning and purpose for which the test was designed. If a researcher uses a questionnaire to collect research data, then the items composed in that questionnaire constitute the test instrument, which must measure what the research aims to measure.
One way to assess the validity of a test instrument is to look at item discriminability (daya pembeda item). Item discriminability is the most appropriate method to use for any type of test. In this study, item discriminability is assessed by means of the item-total correlation. The item-total correlation is the consistency between an item's score and the overall score, as reflected in the magnitude of the correlation coefficient between each item and the total score; in this study the point-biserial correlation coefficient is used, with the following calculation steps:

Point-Biserial Correlation Coefficient
When the items are dichotomous (correct/incorrect, true/false), the point-biserial correlation for item i is:

r_{pbis} = \frac{\overline{X}_i - \overline{X}}{SD_x}\,\sqrt{\frac{p}{1-p}}

where: \overline{X} = the mean test score for all respondents
\overline{X}_i = the mean test score for only those respondents who answered item i correctly
p = the proportion of people who answered item i correctly
1 - p = the proportion of people who answered item i incorrectly
SD_x = the standard deviation of the test scores for all respondents
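As a quick check of the formula above, here is a sketch that computes it by hand and compares it with scipy's built-in point-biserial routine, on invented right/wrong data for a single item; the two printed values should agree up to rounding.

```python
import numpy as np
from scipy import stats

# Invented data: total test scores and the 0/1 result on item i for 10 people.
test_scores = np.array([55, 60, 48, 72, 66, 40, 80, 58, 63, 75], dtype=float)
item_i      = np.array([ 1,  1,  0,  1,  1,  0,  1,  0,  1,  1])

p = item_i.mean()                              # proportion correct on item i
x_bar = test_scores.mean()                     # mean score, all respondents
x_bar_i = test_scores[item_i == 1].mean()      # mean score, correct respondents
sd_x = test_scores.std()                       # population SD of all scores

r_manual = (x_bar_i - x_bar) / sd_x * np.sqrt(p / (1.0 - p))
r_scipy, _ = stats.pointbiserialr(item_i, test_scores)
print(round(r_manual, 4), round(r_scipy, 4))
```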

Once the correlation coefficients for all items have been computed, one must determine the smallest value that can still be considered "high" enough to indicate consistency between the item score and the total score. There is no strict cutoff here. The main principle of item selection based on the correlation coefficient is to look for coefficients that are as high as possible and to discard any item with a negative (-) correlation or a coefficient close to zero (0.00).
According to Friedenberg (1995), in the development and construction of psychological scales a minimum correlation coefficient of 0.30 is usually used. Thus, all items with a correlation below 0.30 can be set aside, and the items to be included in the test instrument are those with correlations above 0.30, with the understanding that the closer the correlation is to one (1.00), the better the item's consistency (validity).

Sunday, July 27, 2008

WinZip

WinZip
WinZip® Quick Start Guide

Copyright © 1991-2005, WinZip International LLC
All Rights Reserved.

WinZip is a registered trademark of WinZip International LLC


About the Quick Start Guide

This Guide introduces some file compression terms, describes some of the initial steps in installing WinZip, and provides a first look at using some WinZip features. For additional information, see the tutorials that come with WinZip, the WinZip help file, and the WinZip web site at http://www.winzip.com.

What is an Archive or Zip File, Anyway?

Zip files are "archives" used for storing and distributing files, and can contain one or more files. Usually the files "archived" in a Zip are compressed to save space. Zip files are often used to:

• Distribute files on the Internet: Only one Zip file transfer operation (download) is required to obtain all related files, and file transfer is quicker because the archived files are compressed.

• Send a group of related files to an associate: When you distribute the collection of files as an archive, you benefit from the file grouping and compression as well.

• Save disk space: If you have large files that are important but seldom used, such as large data files, simply compress these files into an archive and then unzip (or "extract") them only when needed.

What Does WinZip Do?

WinZip makes it easy for Windows users to work with archives. WinZip features an intuitive point-and-click drag-and-drop interface for viewing, running, extracting, adding, and deleting files in archives with a standard Windows interface, and also provides a Wizard interface that further simplifies the process of working with Zip files.

About WinZip's Setup Options

During the WinZip setup procedure you are asked to select either the WinZip Classic interface or the WinZip Wizard interface.

• WinZip Classic: The powerful WinZip Classic interface is preferred if you have a general understanding of Windows and of Zip files. Most users will be quite comfortable with its Explorer-like interface once the basics of Zip files are understood.

• WinZip Wizard: The WinZip Wizard guides you through some of the most common operations involving Zip files. If you are new to Windows or unfamiliar with Zip files, you may wish to start with the Wizard and switch later to the more powerful Classic interface.



Dreamweaver

Welcome to Dreamweaver

Macromedia Dreamweaver MX 2004 is a professional HTML editor for designing, coding, and developing websites, web pages, and web applications. Whether you enjoy the control of hand-coding HTML or prefer to work in a visual editing environment, Dreamweaver provides you with helpful tools to enhance your web creation experience.
The visual editing features in Dreamweaver let you quickly create pages without writing a line of code. You can view all your site elements or assets and drag them from an easy-to-use panel directly into a document. You can streamline your development workflow by creating and editing images in Macromedia Fireworks or another graphics application, then importing them directly into Dreamweaver, or by adding Macromedia Flash objects.
Dreamweaver also provides a full-featured coding environment that includes code-editing tools (such as code coloring and tag completion) and reference material on HTML, Cascading Style Sheets (CSS), JavaScript, ColdFusion Markup Language (CFML), Microsoft Active Server Pages (ASP), and JavaServer Pages (JSP). Macromedia Roundtrip HTML technology imports your hand-coded HTML documents without reformatting the code; you can then reformat code with your preferred formatting style.
Dreamweaver also enables you to build dynamic database-backed web applications using server technologies such as CFML, ASP.NET, ASP, JSP, and PHP.
Dreamweaver is fully customizable. You can create your own objects and commands, modify keyboard shortcuts, and even write JavaScript code to extend Dreamweaver capabilities with new behaviors, Property inspectors, and site reports.
The Dreamweaver accessibility validation feature
The accessibility validation feature in Dreamweaver MX uses technology from UsableNet. UsableNet is an industry leader in developing easy-to-use software to automate usability and accessibility testing and repair. For additional assistance with accessibility testing, try the UsableNet LIFT for Macromedia Dreamweaver, a complete solution for developing usable and accessible websites. UsableNet Lift for Macromedia Dreamweaver includes fix wizards for complex tables, forms, and images; a global ALT editor; customizable reporting; and a new active monitoring mode that ensures content is accessible as pages are being built. Request a demo of Lift for Macromedia Dreamweaver at www.usablenet.com.



Friday, June 13, 2008

ECommerce



ECommerce

Have you ever thought about running an eCommerce business? Would you like to make money from your own home? The impact of young consumers on the online business community is visible in the way they share product recommendations over the internet. The way eCommerce is changing, growing and improving every day is remarkable. What is eCommerce? eCommerce stands for electronic commerce, which is used to describe doing business over the internet: the buying and selling of goods on the Internet using web pages. That is something almost anyone could do if they put their mind to it.

Electronics has gained considerable attention within home business opportunities over the last few years. And there is no doubt that developments in wireless technology will have a great influence on the way in-home business expands in the future. The field of digital electronics is exciting, fast moving, and constantly changing each and every day; even dashboard DVD players seem quaint and may even become obsolete when you consider how fast electronics research and development is moving in our world today.

I decided to start my online business because I wanted to be at home with my family. It is important to me to be with my children, but working is a necessity for me. I knew I had to do something that would allow me to both stay at home and still have money coming in to raise my children. That's when I decided to try an online business of my own. I must admit the hours it took to start this business were stressful, but in the end it has been well worth the time. Because I have succeeded, I know now that if a person puts a little time and effort into this kind of business, the rewards will be great.

Notebook



Notebook Buying Tips?



Why have notebooks become so popular? It has been estimated that notebook sales have increased by an average of 20% per year in the United States alone. Among the many advantages they offer, portability is one of the main reasons people end up buying one. However, before any purchase is made, other features should be considered as well.
The notebook was first made available in the early eighties. Although much heavier and bulkier than today's notebooks, it had the unique portability feature that put this innovative product in a class by itself. Although not much of a commercial success then, it gave the computer industry a goal to pursue: manufacturing this item with a better weight, size and performance ratio, making it one of today's most wanted pieces of computer hardware.
Notebook sizes have become much smaller, yet remain big enough for comfortable handling and operation. Notebooks can be grouped into the following categories: 1 - Tablet PC: the size of a paper tablet, weighing no more than 4 pounds; 2 - Ultra Portable: a little bigger than a Tablet PC, around 4 pounds, no internal CD or DVD drive, display of 12 inches or smaller; 3 - Thin and Light: a mid-size notebook, 10-14" x 10", 1 to 1.5 inches thick, around 7 pounds, wireless network capability, 14-inch display, combo CD-RW/DVD; 4 - Desktop Replacement: the largest category of notebooks, more than 12" x 10", more than 7 pounds, 15-17 inch display or larger, wireless network capability, combo CD-RW/DVD.
Another important feature to look for is performance. Notebooks provide performance very close to that of traditional desktop computers and should handle all computer-related tasks with great ease. When purchasing a notebook, make sure it has a recent CPU model and plenty of RAM and hard disk space. Notebook performance is directly related to CPU clock speed, RAM and hard disk space; for these items, more is never enough.
Another feature to look for is the DVD player. It can come in handy for entertainment purposes, enabling one to watch movies while traveling. Wireless connectivity is also a feature to look for in a notebook. Some notebooks feature an infrared port, which can be used to connect a mobile phone. There are also other wireless technologies, such as Bluetooth and Wi-Fi, which allow mobile phones, printers and PDAs to be connected to certified public and private networks. The ability to have a mobile connection is definitely a plus in today's connected world.
Notebooks can be expanded through the use of plug-in PC cards. There is also a newer standard called ExpressCard, a smaller and faster plug-in card that provides more features for multimedia tasks.
Notebooks have certainly become a required item for one's mobile computing tasks, whether used for public, private, personal or professional purposes. Their portability and small size make them an attractive all-around piece of computer hardware. For those looking for mobile computer hardware, notebooks can certainly be a good solution at affordable prices.
Roberto Sedycias
IT Consultant

Hardware Technology



The Super Duper Problem Fixer


by: Ray Geide

One of our customers pointed out a new program to me and wanted me to check it out. This program called itself a bug fixer. It was a sharp looking program and claimed to fix bugs on the user's computer that he didn't even know existed.

It sounded like a super duper problem solver until I downloaded it and took a closer look. Being a programmer I quickly saw behind its smoke and mirrors. It actually only performed six of the over 1000 cleaning processes which our A1Click Ultra PC Cleaner and RegVac Registry Cleaner do.

Even though it did little compared to our programs, it found 504 problems. How can that be? My computer was clean. The program did not show any details about the results but wanted $30 before it would clean them. I'll never know for sure about those results, but I suspect that they were fabricated and that the true number of problems was 0.

There are many shady developers out there that just want to make a quick buck. I doubt this bug fixer program will even be around in a year.

This provides a good lesson to anyone. Be sure to purchase software from a trusted developer and don't buy a program just because it looks nice.

We have been in the software business since 1996 and are continually improving our programs. You will not hear hype and lies from us. Our programs may not look that good on the surface, but under the hood they are super. When you purchase our programs, all future updates are free.

If you haven't tried RegVac Registry Cleaner and A1Click Ultra PC Cleaner, try them today.

Thursday, June 12, 2008

Markov

Markov

Markov Random Fields and Images
by
Patrick Pérez

At the intersection of statistical physics and probability theory, Markov random fields and Gibbs distributions emerged in the early eighties as powerful tools for modeling images and coping with high-dimensional inverse problems from low-level vision. Since then, they have been used in many studies from the image processing and computer vision community. A brief and simple introduction to the basics of the domain is proposed.

1. Introduction and general framework
With a seminal paper by Geman and Geman in 1984 [18], powerful tools long known to physicists [2] and statisticians [3] were brought in a comprehensive and stimulating way to the knowledge of the image processing and computer vision community. Since then, their theoretical richness, their practical versatility, and a number of fruitful connections with other domains have resulted in a profusion of studies. These studies deal either with the modeling of images (for synthesis, recognition or compression purposes) or with the resolution of various high-dimensional inverse problems from early vision (e.g., restoration, deblurring, classification, segmentation, data fusion, surface reconstruction, optical flow estimation, stereo matching, etc. See collections of examples in [11, 30, 40]).
The implicit assumption behind probabilistic approaches to image analysis is that, for a given problem, there exists a probability distribution that can capture to some extent the variability and the interactions of the different sets of relevant image attributes. Consequently, one considers the variables of the problem as random variables forming a set (or random vector) X = (X_i)_{i=1}^{n} with joint probability distribution P_X.¹
¹ P_X is actually a probability mass in the case of discrete variables, and a probability density function when the X_i's are continuously valued. In the latter case, all summations over states or configurations should be replaced by integrals.
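As a toy illustration of the kind of model being described (a Gibbs distribution over a grid of pixel variables, where each site interacts only with its neighbours), here is a minimal Ising-style Gibbs sampling sweep in Python; the 4-neighbour energy and the coupling value are standard textbook choices, not anything taken from this text.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, beta = 32, 32, 0.8                  # grid size and coupling strength (assumed values)
x = rng.choice([-1, 1], size=(H, W))      # binary "image" of spins/labels

def neighbour_sum(x, i, j):
    """Sum of the 4-connected neighbours of pixel (i, j), with free boundaries."""
    s = 0
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            s += x[ni, nj]
    return s

# One Gibbs-sampling sweep: each site is resampled from its conditional
# distribution, which depends only on its neighbours (the Markov property).
for i in range(H):
    for j in range(W):
        field = beta * neighbour_sum(x, i, j)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(x_ij = +1 | neighbours)
        x[i, j] = 1 if rng.random() < p_plus else -1

print("mean spin after one sweep:", x.mean())
```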

Tuesday, June 10, 2008

Logic Of Triplet Markov Fields

Unsupervised image segmentation using triplet Markov fields
by
Dalila Benboudjema, Wojciech Pieczynski

Abstract
Hidden Markov fields (HMF) models are widely applied to various problems arising in
image processing. In these models, the hidden process of interest X is a Markov field and must be estimated from its observable noisy version Y. The success of HMF is mainly due to the fact that the conditional probability distribution of the hidden process with respect to the observed one remains Markovian, which facilitates different processing strategies such as Bayesian restoration. HMF have been recently generalized to ‘‘pairwise’’ Markov fields (PMF), which offer similar processing advantages and superior modeling capabilities. In PMF one directly assumes the Markovianity of the pair (X,Y). Afterwards, ‘‘triplet’’ Markov fields (TMF), in which the distribution of the pair (X,Y) is the marginal distribution of a Markov field (X,U,Y), where U is an auxiliary process, have been proposed and still allow restoration processing. The aim of this paper is to propose a new parameter estimation method adapted to TMF, and to study the corresponding unsupervised image segmentation methods. The latter are validated via experiments and real image processing.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Hidden Markov fields; Pairwise Markov fields; Triplet Markov fields; Bayesian classification; Mixture estimation; Iterative conditional estimation; Stochastic gradient; Unsupervised image segmentation

1. Introduction
Hidden Markov fields (HMF) are widely used in solving various problems, comprising two stochastic processes X = (X_s)_{s∈S} and Y = (Y_s)_{s∈S}, in which X = x is unobservable and must be estimated from the observed Y = y. This wide use is due to the fact that standard Bayesian restoration methods can be used in spite of the large size of S: see [3,12,19] for seminal papers and [14,33], among others, for general books. The qualifier ‘‘hidden Markov’’ means that the hidden process X has a Markov law. When the distributions p(y|x) of Y conditional on X = x are simple enough, the pair (X,Y) then retains the Markovian structure, and likewise for the distribution p(x|y) of X conditional on Y = y. The Markovianity of p(x|y) is crucial because it allows one to estimate the unobservable X = x from the observed Y = y, even in the case of very rich sets S. However, the simplicity of p(y|x) required in standard HMF to ensure the Markovianity of p(x|y) can pose problems; in particular, such situations occur in textured image segmentation [21]. To remedy this, the use of pairwise Markov fields (PMF), in which one directly assumes the Markovianity of (X,Y), has been discussed in [26]. Both p(y|x) and p(x|y) are then Markovian, the former ensuring possibilities of modeling textures without approximations, and the latter allowing Bayesian processing, similar to those provided by HMF. PMF have then been generalized to ‘‘triplet’’ Markov fields (TMF), in which the distribution of the pair Z = (X,Y) is the marginal distribution of a Markov field T = (X,U,Y), where U = (U_s)_{s∈S} is an auxiliary random field [27]. Once the space K of possible values of each U_s is simple enough, TMF still allow one to estimate the unobservable X = x from the observed Y = y. Given that in TMF T = (X,U,Y) the distribution of Z = (X,Y) is its marginal distribution, the Markovianity of T does not necessarily imply the Markovianity of Z; and thus a TMF model is not necessarily a PMF one. Therefore, TMF are more general than PMF and thus are likely to be able to model more complex situations. Conversely, a PMF model can be seen as a particular TMF model in which X = U. There are some studies concerning triplet Markov chains [18,28], where general ideas somewhat similar to those discussed in the present paper have been investigated. However, as Markov fields based processing is quite different from the Markov chains based one, we will concentrate here on Markov fields with no further reference to Markov chains.