#Why Are OSPF Neighbors Stuck in Exstart/Exchange State?
This problem occurs most frequently when running OSPF between a Cisco router and another vendor's router, and the root cause is a mismatch in the maximum transmission unit (MTU) settings on the neighboring routers' interfaces. If the router with the higher MTU sends a packet larger than the MTU set on the neighboring router, the neighboring router ignores the packet, and the adjacency stalls in Exstart/Exchange.
http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080093f0d.shtml
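The clean fix is to make the MTUs match. When that isn't possible, Cisco IOS can be told to skip the MTU check during database exchange. A minimal sketch; the interface name and MTU value are illustrative:

```
interface GigabitEthernet0/0
 ip mtu 1500            ! preferred fix: align the MTU with the neighbor
 ip ospf mtu-ignore     ! workaround: skip the MTU check in DBD exchange
```

Note that `ip ospf mtu-ignore` only suppresses the check; if the neighbor really does drop oversized packets, large LSAs can still be lost.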
#OSPF Neighbor Problems Explained
http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080094050.shtml?referring_site=bodynav
#What is the difference between a thread and a process?
Processes and threads are both fundamental concepts in operating systems and are used to execute tasks concurrently. However, they differ in several key aspects:
1. **Definition**:
- **Process**: A process is an instance of a running program. It consists of the program code, associated data, and resources such as memory, file descriptors, and system resources. Each process has its own memory space and runs independently of other processes.
- **Thread**: A thread is a lightweight unit of execution within a process. Threads share the same memory space and resources as the process that created them. Multiple threads within the same process can run concurrently and share resources such as memory, file descriptors, and I/O operations.
2. **Resource Usage**:
- **Process**: Processes are heavy-weight entities that require their own memory space, address space, and resources. Each process has its own memory allocation, file descriptors, and other system resources.
- **Thread**: Threads are light-weight entities that share the same memory space and resources within a process. Threads within the same process can communicate and share data directly without the need for inter-process communication mechanisms.
3. **Concurrency**:
- **Process**: Processes are independent units of execution and run concurrently with other processes on the system. Inter-process communication mechanisms such as pipes, sockets, and shared memory are used to facilitate communication between processes.
- **Thread**: Threads within the same process share the same memory space and resources and can execute concurrently. Threads can communicate and share data directly without the need for inter-process communication mechanisms.
4. **Creation Overhead**:
- **Process**: Creating a new process involves significant overhead, including memory allocation, copying the process image, and setting up process-specific resources. As a result, creating a new process is relatively expensive.
- **Thread**: Creating a new thread within a process is relatively lightweight compared to creating a new process. Threads within the same process share the same memory space and resources, so creating a new thread involves minimal overhead.
5. **Isolation**:
- **Process**: Processes are isolated from each other and run independently. Each process has its own memory space and resources, which provides a level of isolation and protection.
- **Thread**: Threads within the same process share the same memory space and resources and can access each other's data directly. There is no inherent isolation between threads within the same process.
In summary, processes and threads are both used for concurrent execution in operating systems, but they differ in terms of resource usage, concurrency model, creation overhead, isolation, and communication mechanisms. Processes provide stronger isolation between tasks, while threads provide lightweight concurrency within the same process.
A process is an executing instance of an application. What does that mean? Well, for example, when you double-click the Microsoft Word icon, you start a process that runs Word. A thread is a path of execution within a process. Also, a process can contain multiple threads. When you start Word, the operating system creates a process and begins executing the primary thread of that process.
It’s important to note that a thread can do anything a process can do. But since a process can consist of multiple threads, a thread could be considered a ‘lightweight’ process. Thus, the essential difference between a thread and a process is the work that each one is used to accomplish. Threads are used for small tasks, whereas processes are used for more ‘heavyweight’ tasks – basically the execution of applications.
Another difference between a thread and a process is that threads within the same process share the same address space, whereas different processes do not. This allows threads to read from and write to the same data structures and variables, and also facilitates communication between threads. Communication between processes – also known as IPC, or inter-process communication – is quite difficult and resource-intensive.
Multithreading
Threads, of course, allow for multi-threading. A common example of the advantage of multithreading is the fact that you can have a word processor that prints a document using a background thread, but at the same time another thread is running that accepts user input, so that you can type up a new document.
If we were dealing with an application that uses only one thread, then the application would only be able to do one thing at a time – so printing and responding to user input at the same time would not be possible in a single threaded application.
Each process has its own address space, but the threads within the same process share that address space. Threads also share any other resources within that process. This means that it’s very easy to share data amongst threads, but it’s also easy for the threads to step on each other, which can lead to bad things.
Multithreaded programs must be carefully programmed to prevent those bad things from happening. Sections of code that modify data structures shared by multiple threads are called critical sections. When a critical section is running in one thread it’s extremely important that no other thread be allowed into that critical section. This is called synchronization, which we won't get into any further over here. But, the point is that multithreading requires careful programming.
Also, context switching between threads is generally less expensive than in processes. And finally, the overhead (the cost of communication) between threads is very low relative to processes.
Here’s a summary of the differences between threads and processes:
1. Threads are easier to create than processes since they don't require a separate address space.
2. Multithreading requires careful programming since threads share data structures that should only be modified by one thread at a time. Unlike threads, processes don't share the same address space.
3. Threads are considered lightweight because they use fewer resources than processes.
4. Processes are independent of each other. Threads, since they share the same address space, are interdependent, so caution must be taken so that different threads don't step on each other. This is really another way of stating #2 above.
5. A process can consist of multiple threads.
#Are MAC addresses only for devices with an ethernet interface?
No, this is a popular misconception. Even iPhones – which have no Ethernet interface – still have (and need) a MAC address.
MAC (Media Access Control) addresses are typically associated with devices that have Ethernet interfaces, as Ethernet is the most common technology that uses MAC addresses for addressing at the data link layer (Layer 2) of the OSI model. However, MAC addresses are not exclusive to Ethernet interfaces.
While Ethernet interfaces are the primary users of MAC addresses, other network technologies also utilize MAC addresses for addressing. Some examples include:
1. **Wi-Fi (IEEE 802.11)**: Wi-Fi devices also have MAC addresses associated with their wireless network interfaces. Wi-Fi frames use MAC addresses for addressing within the local wireless network.
2. **Bluetooth**: Bluetooth devices use Bluetooth MAC addresses for communication within the Bluetooth network. Bluetooth MAC addresses are used for addressing devices participating in Bluetooth connections.
3. **Token Ring**: Token Ring networks also utilize MAC addresses for addressing devices within the Token Ring network. Each device on a Token Ring network has a unique MAC address.
4. **Fibre Channel**: Fibre Channel networks do not use MAC addresses as such; they use World Wide Names (WWNs), including World Wide Port Names (WWPNs), which play a role analogous to MAC addresses for identifying devices within the Fibre Channel fabric.
In short, while Ethernet is the most common technology associated with MAC addresses, many other network technologies use them for addressing and identification. The 48-bit structure and format of MAC addresses (EUI-48) are standardized across these technologies to ensure interoperability and compatibility.
#Puzzles:
http://programmerinterview.com/index.php/puzzles/introduction
#What is a virtual memory, how is it implemented, and why do operating systems use it?
Real, or physical, memory exists on RAM chips inside the computer. Virtual memory, as its name suggests, doesn’t physically exist on a memory chip. It is an optimization technique and is implemented by the operating system in order to give an application program the impression that it has more memory than actually exists. Virtual memory is implemented by various operating systems such as Windows, Mac OS X, and Linux.
So how does virtual memory work? Let’s say that an operating system needs 120 MB of memory in order to hold all the running programs, but there’s currently only 50 MB of available physical memory stored on the RAM chips. The operating system will then set up 120 MB of virtual memory, and will use a program called the virtual memory manager (VMM) to manage that 120 MB. The VMM will create a file on the hard disk that is 70 MB (120 – 50) in size to account for the extra memory that’s needed. The O.S. will now proceed to address memory as if there were actually 120 MB of real memory stored on the RAM, even though there’s really only 50 MB. So, to the O.S., it now appears as if the full 120 MB actually exists. It is the responsibility of the VMM to deal with the fact that there is only 50 MB of real memory.
#Memory corruption
Memory corruption refers to the unintended modification of data stored in computer memory. It occurs when a program writes to an area of memory that it does not have permission to access or modifies memory locations beyond the intended boundaries of data structures. Memory corruption can lead to various issues, including program crashes, data corruption, security vulnerabilities, and unpredictable behavior.
There are several common causes of memory corruption:
1. **Buffer Overflows**: A buffer overflow occurs when a program writes more data to a buffer (an allocated block of memory) than it can hold. This can overwrite adjacent memory locations, leading to memory corruption.
2. **Use-after-Free**: Use-after-free is a type of memory corruption that occurs when a program accesses memory that has been deallocated (freed). This can happen when a program continues to use a pointer to memory that has already been released, leading to unpredictable behavior.
3. **Dangling Pointers**: Dangling pointers occur when a program dereferences a pointer that points to invalid memory, typically because the memory has been deallocated or the pointer has not been properly initialized.
4. **Heap Corruption**: Heap corruption occurs when a program improperly manipulates memory allocated on the heap (dynamically allocated memory). This can happen due to bugs in memory allocation/deallocation routines or incorrect usage of pointers.
5. **Stack Smashing**: Stack smashing, also known as stack buffer overflow, occurs when a program writes beyond the bounds of a stack-allocated buffer. This can overwrite the return address, function pointers, or other stack metadata, potentially leading to code execution vulnerabilities.
Memory corruption can have serious consequences, including crashes, data loss, security vulnerabilities (such as buffer overflow exploits), and system instability. Detecting and fixing memory corruption issues is essential for maintaining the reliability, security, and integrity of software systems. Techniques such as bounds checking, memory sanitization, and static/dynamic analysis tools can help identify and mitigate memory corruption vulnerabilities.
Memory corruption occurs in a computer program when the contents of a memory location are unintentionally modified due to programming errors; this is termed violating memory safety. When the corrupted memory contents are used later in that program, it leads either to program crash or to strange and bizarre program behavior. Nearly 10% of application crashes on Windows systems are due to heap corruption.
Languages like C and C++ give the programmer explicit memory management and pointer arithmetic. These features are designed for developing efficient applications and system software, but using them incorrectly may lead to memory corruption errors.
Memory corruption is one of the most intractable classes of programming errors, for two reasons:
- The source of the memory corruption and its manifestation may be far apart, making it hard to correlate cause and effect.
- Symptoms appear under unusual conditions, making it hard to reproduce the error consistently.
Memory corruption errors can be broadly classified into four categories:
Using uninitialized memory: Contents of uninitialized memory are treated as garbage values. Using such values can lead to unpredictable program behavior.
Using un-owned memory: It is common to use pointers to access and modify memory. If such a pointer is a null pointer, a dangling pointer (pointing to memory that has already been freed), or points to a memory location outside of current stack or heap bounds, it refers to memory that the program does not then possess. Using such pointers is a serious programming flaw. Accessing such memory usually causes operating system exceptions, which most commonly lead to a program crash. Strictly speaking, if the memory access is a read, the issue may not be considered corruption, because the memory is not modified.
Using beyond allocated memory (buffer overflow): If an array is used in a loop with an incorrect terminating condition, memory beyond the array bounds may be manipulated. Buffer overflow is one of the most common programming flaws exploited by computer viruses and causes serious security issues in widely used programs (e.g., the return-to-libc attack; stack-smashing protection is a widely deployed countermeasure). One can also incorrectly access the memory before the beginning of a buffer.
Faulty heap memory management: Memory leaks and freeing non-heap or un-allocated memory are the most frequent errors caused by faulty heap memory management.
#SNMP agent error codes:
noError (0): The agent reports that no errors occurred during transmission.
tooBig (1): The agent could not place the results of the requested SNMP operation in a single SNMP message.
noSuchName (2): The requested SNMP operation identified an unknown variable.
badValue (3): The requested SNMP operation tried to change a variable but specified either a syntax or value error.
readOnly (4): The requested SNMP operation tried to change a variable that was not allowed to change, according to the community profile of the variable.
noAccess (6): The specified SNMP variable is not accessible.
wrongType (7): The value specifies a type that is inconsistent with the type required for the variable.
noCreation (11): The variable does not exist, and the agent cannot create it.
resourceUnavailable (13): Assigning the value to the variable requires allocation of resources that are currently unavailable.
authorizationError (16): An authorization error occurred.
notWritable (17): The variable exists but the agent cannot modify it.
#SNMP engine ID
The SNMP engine ID is a unique string used to identify the device for administration purposes. You do not need to specify an engine ID for the device; a default string is generated using Cisco's enterprise number (1.3.6.1.4.1.9) and the MAC address of the first interface on the device.
#Explain LAG. Why is it not supported on a half-duplex port? What is a static LAG?
LACP does not support half-duplex mode; half-duplex ports in LACP port channels are put in the suspended state. With a static link aggregate, no negotiation protocol runs: all configuration settings are set up manually and identically on both participating LAG endpoints.
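On Cisco IOS the difference shows up in the channel-group mode: `on` builds a static LAG with no negotiation, while `active` runs LACP. A minimal sketch; the interface and group numbers are illustrative:

```
interface GigabitEthernet0/1
 duplex full                   ! LACP requires full duplex
 channel-group 1 mode on       ! static LAG: no negotiation protocol
! channel-group 1 mode active  ! alternative: negotiate the bundle with LACP
```

With `mode on`, both ends must be configured consistently by hand, since there is no protocol to detect a mismatch.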
#Layer 2 switches and bridges are faster than routers because they don’t spend time looking at the network-layer header information. Instead, they look at the frame’s hardware addresses before deciding to either forward, flood, or drop the frame.
#There can be only one spanning-tree instance per bridge, while switches can have many.
#No data will be forwarded until convergence is complete.
#Following the type/length field is the actual data contained in the frame. After physical-layer and
link-layer processing is complete, this data will eventually be sent to an upper-layer protocol. In the
case of Ethernet, the upper-layer protocol is identified in the type field. In the case of IEEE 802.3,
the upper-layer protocol must be defined within the data portion of the frame, if at all. If data in the
frame is insufficient to fill the frame to its minimum 64-byte size, padding bytes are inserted to
ensure at least a 64-byte frame.
#Conversion
1 Bit = Binary Digit
8 Bits = 1 Byte
1024 Bytes = 1 Kilobyte
1024 Kilobytes = 1 Megabyte
1024 Megabytes = 1 Gigabyte
1024 Gigabytes = 1 Terabyte
1024 Terabytes = 1 Petabyte
1024 Petabytes = 1 Exabyte
1024 Exabytes = 1 Zettabyte
1024 Zettabytes = 1 Yottabyte
1024 Yottabytes = 1 Brontobyte
1024 Brontobytes = 1 Geopbyte
1024 Geopbytes = 1 Saganbyte
1024 Saganbytes = 1 Pijabyte
1024 Pijabytes = 1 Alphabyte
1024 Alphabytes = 1 Kryatbyte
1024 Kryatbytes = 1 Amosbyte
1024 Amosbytes = 1 Pectrolbyte
1024 Pectrolbytes = 1 Bolgerbyte
1024 Bolgerbytes = 1 Sambobyte
1024 Sambobytes = 1 Quesabyte
1024 Quesabytes = 1 Kinsabyte
1024 Kinsabytes = 1 Rutherbyte
1024 Rutherbytes = 1 Dubnibyte
1024 Dubnibytes = 1 Seaborgbyte
1024 Seaborgbytes = 1 Bohrbyte
1024 Bohrbytes = 1 Hassiubyte
1024 Hassiubytes = 1 Meitnerbyte
1024 Meitnerbytes = 1 Darmstadbyte
1024 Darmstadbytes = 1 Roentbyte
1024 Roentbytes = 1 Coperbyte...
(Note: the yottabyte is the largest officially named unit; everything beyond it in this list is an informal name from popular trivia, not part of any standard. Strictly, the 1024-based multiples have their own IEC names: kibibyte, mebibyte, and so on.)
#What if I configure the administrative distance to be the same for two routing protocols? Will the router install routes from each routing protocol and allow me to load balance traffic?
http://bradhedlund.com/2007/12/31/two-routing-protocols-same-administrative-distance/
#What are the benefits of subnetting?
Subnetting helps reduce network traffic and the size of the routing tables. It’s also a way to add security to network traffic by isolating it from the rest of the network.
#What does the EIGRP stuck in active message mean?
When EIGRP reports a route as stuck in active (SIA), it means that:
- The route reported by the SIA has gone away.
- An EIGRP neighbor (or neighbors) has not replied to the query for that route.
When the SIA occurs, the router resets the neighbor relationship with any neighbor that did not reply to the query.
#ospf area test and convergence
http://rekrowten.wordpress.com/2012/03/05/crazy-big-ospf-area-test-with-24-routers/
#adjacency issue in ospf
http://rekrowten.wordpress.com/2012/03/26/ospf-mtu-adjacency-establishment-problem/
#decision rules of route map
http://rekrowten.wordpress.com/2012/05/23/decision-rules-of-route-map/
#network layer convergence
http://rekrowten.wordpress.com/2012/07/16/convergence-at-network-layer/
#convergence of stp and rstp
http://rekrowten.wordpress.com/2012/07/23/convergence-of-stp-rstp-and-mst-part-2/
#convergence of HSRP,VRRP,GLBP
http://rekrowten.wordpress.com/2012/07/30/convergence-of-hsrp-vrrp-and-glbp-part-3/
#RIP convergence
http://rekrowten.wordpress.com/2012/08/13/convergence-of-protocol-rip-part-5/
#OSPF convergence
http://rekrowten.wordpress.com/2012/08/20/convergence-of-protocol-ospf-part-6/
#EIGRP convergence
http://rekrowten.wordpress.com/2012/09/03/convergence-of-protocol-eigrp-part-8/
#BGP multiple paths
http://rekrowten.wordpress.com/2013/06/14/bgp-multiple-best-paths/