I chose this article because, for working on analysis, the concepts of CPU, processor, core, and thread should be clear.
Nowadays CPU and processor mean almost the same thing.
A dual-core CPU is like having two CPUs inside one chip, but they both have to access the motherboard resources through the same set of pins.
A processor "core" is a physical processing unit on the die (the silicon wafer - the actual chip). Older CPUs have only one core per chip. For these, to get two processing units (cores) you must have a motherboard with two separate CPU sockets. With two physical CPUs, communication between the CPUs has to go out one CPU socket, across the motherboard support circuitry, and in through the socket of the second CPU. This is considerably slower than the speed at which things take place inside the circuitry on the same chip. So to increase processing speed, and to lower manufacturing and end-user costs, individual CPUs were designed to have more than one processing unit (cores) on the chip. So a 2-core CPU is very much like having two separate CPUs but is less expensive and can often be faster than two single-core CPUs of the same capability because of the increased communication speed between them, and because they can share common circuitry such as a cache.
How to find the number of processors and the number of cores
Type msinfo32.exe at the Run prompt. Under System Summary --> Processor we can see the number of cores and logical processors.
Type systeminfo at the command prompt to get the number of processors, shown for example as:
Processor(s): 1 Processor(s) Installed.
Processor(s): 2 Processor(s) Installed.
If hyper-threading is enabled, this will give the number of logical processors = 2 x number of physical cores.
In Intel processors the Hyper-Threading concept is available; in AMD processors there is no hyper-threading.
Number of logical processors = number of physical cores (AMD processors only).
In the case of Intel Hyper-Threading (HT), you have two logical cores per physical core, so a quad-(physical) core i7 processor will have eight logical cores. However, the two logical cores within one physical core cannot truly operate in parallel with respect to each other. This is because HT works by having one logical core operate while the other logical core is waiting and has nothing to do (for example, when it is waiting on a cache or memory fetch).
How, then, can these logical cores be considered parallel? Most of the time they can be, because during typical CPU operation you will almost never see continuous execution of a single thread on every clock cycle - there are always gaps when one logical core is waiting for something and the second logical core can kick in and do its job.
We can also see all logical cores under Task Manager --> Performance: CPU Usage.
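The same count is also visible from inside a JVM: Runtime.availableProcessors() reports the number of logical processors (physical cores x 2 when hyper-threading is enabled). A minimal Java sketch; the class name is just for illustration:

public class CpuInfo {
    public static void main(String[] args) {
        // Number of logical processors visible to the JVM
        // (equals physical cores x 2 when hyper-threading is enabled).
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors: " + logical);
    }
}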
Thread
In a simplistic view, a thread (a sequence of steps to be executed) is constructed in a "pipeline" and then "scheduled" for execution by a CPU core. Once a thread is scheduled, the CPU core executes the pipelined instructions. Frequently, while the thread is executing, the CPU needs more information than just the series of instructions: it needs data. These data values may be only a few nanoseconds away in some RAM memory location, or they may be millions of nanoseconds (milliseconds) away on a disk drive. When a core has to stop executing the thread while it waits to fetch the external data, time is lost. No other thread can be executed while the waiting thread is scheduled on that core (the thread is given an allotment of time and is not kicked out early).
This is similar to a single lane bridge. Only one car can use the bridge at a time. If a driver stops to take a scenic picture, no other car can use the bridge until the driver gets his picture and moves off the bridge. To prevent complete closure of the core, the CPU has a mechanism to swap an entire thread off of the core if it experiences a serious problem (like a car with a breakdown), but that is a very costly process and is not used if the thread is just waiting for I/O to complete so that it may continue executing. Like the car analogy, forcing a hung thread out of a core prematurely is like waiting for a tow truck to get the broken down car off the bridge. It takes quite a while, but it is still quicker than repairing the car on the bridge.
A multi-threaded core is like a bridge that has a passing lane. When the driver on the bridge stops to take a picture, the car behind him can still use the bridge by passing the stopped car using the passing lane. Think of it as two different pipelines where thread executions are constructed. Still only one can be scheduled to a core at a time. But if the executing thread is waiting to fetch I/O, the other thread can jump in the core and get a little CPU time in while the thread assigned to the core is waiting.
This allows what may look like two cores (two pipelines executing at the same time). BUT IT IS NOT. Still only one thread at a time can be executed by the core. This just allows another thread to execute during the waiting period of the first thread. Depending upon the specific application design, data needs, I/O, etc., multithreading can actually decrease performance or may increase performance by up to about 40% (cited from Intel and Microsoft sources).
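To make the passing-lane idea concrete at the software level, here is a small Java sketch (the class name and timings are illustrative, and the sleep simply stands in for an I/O wait): one thread blocks as if waiting on I/O while a second thread keeps doing useful work, so the wait does not idle the whole machine.

public class PassingLaneDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread 1: simulates a thread stalled on I/O (the "stopped car").
        Thread waiter = new Thread(() -> {
            try {
                System.out.println("waiter: waiting on I/O ...");
                Thread.sleep(1000); // stand-in for a slow fetch
                System.out.println("waiter: I/O finished");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Thread 2: keeps computing while the first thread is blocked.
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) {
                sum += i; // useful work done during the wait
            }
            System.out.println("worker: finished, sum=" + sum);
        });

        waiter.start();
        worker.start();
        waiter.join();
        worker.join();
    }
}

While the waiter is stalled, the scheduler (and, on a hyper-threaded core, the second logical core) can keep the worker moving, which is exactly the car using the passing lane.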
Intel CPUs support this kind of multithreading, but only two threads per physical core. AMD CPUs (as noted above) do not support hyper-threading.
WebLogic thread details:
Execute Thread Total Count: the total number of threads “created” from the WebLogic self-tuning pool and visible in a JVM thread dump. This value corresponds to the sum of Active + Standby threads.
Active Execute Threads: the number of threads “eligible” to process a request. When thread demand goes up, WebLogic will start promoting threads from Standby to Active state, which enables them to process future client requests.
Standby Thread Count: the number of threads waiting to be marked “eligible” to process client requests. These threads are created and visible in the JVM thread dump but are not yet available to process a client request.
Execute Thread Idle Count: the number of Active threads currently “available” to process a client request.
Hogging Thread Count: the number of threads taking much more time than the current average execution time calculated by the WebLogic kernel (these counters can also be read over JMX, as sketched below).
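These counters are exposed as attributes of WebLogic's ThreadPoolRuntime MBean, so they can be read over JMX as well as from the console. A rough Java sketch follows; the host, port, credentials, server name (AdminServer) and the exact ObjectName are assumptions that vary per installation, and the WebLogic JMX client jars are typically needed on the classpath.

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class ThreadPoolStats {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: adjust host, port and credentials
        // for your own WebLogic installation.
        JMXServiceURL url = new JMXServiceURL("service:jmx:iiop://localhost:7001/jndi/"
                + "weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Typical ObjectName of the self-tuning pool's runtime MBean;
            // replace AdminServer with the name of your managed server.
            ObjectName pool = new ObjectName(
                    "com.bea:ServerRuntime=AdminServer,Name=ThreadPoolRuntime,Type=ThreadPoolRuntime");
            System.out.println("Total:   " + conn.getAttribute(pool, "ExecuteThreadTotalCount"));
            System.out.println("Idle:    " + conn.getAttribute(pool, "ExecuteThreadIdleCount"));
            System.out.println("Standby: " + conn.getAttribute(pool, "StandbyThreadCount"));
            System.out.println("Hogging: " + conn.getAttribute(pool, "HoggingThreadCount"));
        }
    }
}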
In the monitoring snapshots above (not reproduced here), we have:
Total of 43 threads, 29 in Standby state and 14 in Active state
Out of the 14 Active threads, we have 1 Hogging thread and 7 Idle threads, i.e. 7 threads “available” for request processing
Another way to see the situation: we have a total of 7 threads currently “processing” client requests, with 1 out of 7 in the Hogging state (i.e. taking more time than the current calculated average)
If HoggingThreadCount keeps increasing, the server's health is in danger; at that point you can take a thread dump to see what those threads are doing.
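A thread dump can be taken with jstack or a kill -3 on the server process, and it can also be captured programmatically through the standard ThreadMXBean API. A minimal sketch (the dump is printed to stdout here; in practice you would write it to a file):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDump {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // Dump every live thread, including locked monitors and synchronizers,
        // which is what you inspect when HoggingThreadCount keeps rising.
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            // ThreadInfo.toString() prints the thread name, state and a
            // (truncated) stack trace.
            System.out.print(info);
        }
    }
}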
Clustered
A server cluster is a group of independent servers running Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, and working together as a single system to provide high availability of services for clients. When a failure occurs on one computer in a cluster, resources are redirected and the workload is redistributed to another computer in the cluster. You can use server clusters to ensure that users have constant access to important server-based resources.
Advantages: server clusters provide protection against the following types of failure:
Application and service failures, which affect application software and essential services.
System and hardware failures, which affect hardware components such as CPUs, drives, memory, network adapters, and power supplies.
Site failures in multisite organizations, which can be caused by natural disasters, power outages, or connectivity outages.
Single Quorum Device Cluster
The most widely used cluster type is the single quorum device cluster, also called the standard quorum cluster. In this type of cluster there are multiple nodes with one or more cluster disk arrays, also called the cluster storage, and a connection device, that is, a bus. Each disk in the array is owned and managed by only one server at a time. The disk array also contains the quorum resource. The following figure illustrates a single quorum device cluster with one cluster disk array.
[Figure: Single Quorum Device Cluster]
Because single quorum device clusters are the most widely used cluster, this Technical Reference focuses on this type of cluster.