Deleting the First Node in a Circular List

The first node can be deleted by simply replacing the next field of the tail node with the next field of the first node.
The tail node is the node previous to the head node, which is the one we want to delete. Also, update the tail node's next pointer to point to the node after head, as shown below. Create a temporary node which will point to head.

Applications of Circular Lists

Circular linked lists are used in managing the computing resources of a computer. We can also use circular lists for implementing stacks and queues.
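The deletion steps above can be sketched as follows (a minimal Python sketch; the node and helper names are ours, not from the text):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def build_circular(values):
    """Build a circular list from a non-empty sequence; return the tail node."""
    nodes = [Node(v) for v in values]
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    nodes[-1].next = nodes[0]        # tail points back to the head
    return nodes[-1]                 # tail; tail.next is the head

def delete_front(tail):
    """Delete the first node (tail.next); return the (possibly new) tail."""
    if tail is None:
        return None                  # empty list
    if tail.next is tail:
        return None                  # single node: list becomes empty
    tail.next = tail.next.next       # bypass the old head
    return tail

def to_pylist(tail):
    """Collect the circular list's data starting from the head."""
    if tail is None:
        return []
    out, node = [], tail.next
    while True:
        out.append(node.data)
        if node is tail:
            break
        node = node.next
    return out
```

Note that keeping a pointer to the tail (rather than the head) makes both ends of a circular list reachable in O(1).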
That means elements in doubly linked list implementations consist of data, a pointer to the next node and a pointer to the previous node in the list as shown below. This implementation is based on pointer difference. Each node uses only one pointer field to traverse the list back and forth.
New Node Definition

The ptrdiff pointer field contains the difference between the pointer to the next node and the pointer to the previous node. As an example, consider the following linked list. A memory-efficient implementation of a doubly linked list is possible with minimal compromise of timing efficiency.
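Python has no raw pointer arithmetic, so the pointer-difference idea can be sketched using array indices as stand-in addresses (the node store `mem`, the sentinel index 0 for null, and all helper names are our assumptions, not from the text):

```python
class PDNode:
    def __init__(self, data):
        self.data = data
        self.ptrdiff = 0   # (index of next) - (index of previous); 0 plays null

# mem[0] is a sentinel standing in for the null address
mem = [None]

def build(values):
    """Build a pointer-difference list inside `mem`; return the head's index."""
    idxs = [0]                       # null "address" before the head
    for v in values:
        mem.append(PDNode(v))
        idxs.append(len(mem) - 1)
    idxs.append(0)                   # null "address" after the tail
    for i in range(1, len(idxs) - 1):
        mem[idxs[i]].ptrdiff = idxs[i + 1] - idxs[i - 1]
    return idxs[1]

def traverse(head):
    """Walk forward using only the single ptrdiff field per node."""
    out, prev, cur = [], 0, head
    while cur != 0:
        out.append(mem[cur].data)
        # next = ptrdiff + previous, since ptrdiff = next - previous
        prev, cur = cur, mem[cur].ptrdiff + prev
    return out
```

Walking backward is symmetric: previous = next − ptrdiff, starting from the tail. The same one-field trick is often done with XOR of addresses instead of subtraction.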
However, it takes O(n) to search for an element in a linked list. There is a simple variation of the singly linked list called unrolled linked lists. An unrolled linked list stores multiple elements in each node (let us call it a block for our convenience). In each block, a circular linked list is used to connect all nodes. Assume that there will be no more than n elements in the unrolled linked list at any time.
To simplify this problem, all blocks, except the last one, should contain exactly ⌈√n⌉ elements.

Searching for an element in Unrolled Linked Lists

In unrolled linked lists, we can find the kth element in O(√n):
1. Traverse the list of blocks to the one that contains the kth node, i.e., the ⌈k/⌈√n⌉⌉th block. It takes O(√n) since we may find it by going through no more than √n blocks.
2. Find the (k mod ⌈√n⌉)th node in the circular linked list of this block. It also takes O(√n) since there are no more than ⌈√n⌉ nodes in a single block.

Suppose that we insert a node x after the ith node, and x should be placed in the jth block.
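The two-step search just described can be sketched as follows (an illustrative Python sketch: blocks are modeled as plain Python lists rather than circular linked lists, to keep it short; the helper names are ours):

```python
import math

def build_unrolled(values):
    """Split values into blocks of ceil(sqrt(n)) elements each."""
    n = len(values)
    if n == 0:
        return []
    b = math.isqrt(n - 1) + 1            # ceil(sqrt(n)) for n >= 1
    return [values[i:i + b] for i in range(0, n, b)]

def find_kth(blocks, k):
    """Return the kth element (1-indexed) of an unrolled list."""
    for block in blocks:                 # step 1: O(sqrt(n)) block hops
        if k <= len(block):
            return block[k - 1]          # step 2: O(sqrt(n)) scan in the block
        k -= len(block)
    raise IndexError("k out of range")
```

The block hop plus the in-block scan each touch at most about √n nodes, giving the O(√n) bound from the text.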
Nodes in the jth block and in the blocks after the jth block have to be shifted toward the tail of the list so that each of them still has ⌈√n⌉ nodes. In addition, a new block needs to be added to the tail if the last block of the list is out of space, i.e., if it has more than ⌈√n⌉ elements.

Performing Shift Operation

Note that each shift operation, which includes removing a node from the tail of the circular linked list in a block and inserting a node to the head of the circular linked list in the block after, takes only O(1). The total time complexity of an insertion operation for unrolled linked lists is therefore O(√n); there are at most O(√n) blocks and therefore at most O(√n) shift operations. A temporary pointer is needed to store the tail of A.
In block A, move the next pointer of the head node to point to the second-to-last node, so that the tail node of A can be removed. Let the next pointer of the node which will be shifted (the tail node of A) point to the tail node of B. Let the next pointer of the head node of B point to the node temp points to. Finally, set the head pointer of B to point to the node temp points to.
Now the node temp points to becomes the new head node of B. We have completed the shift operation to move the original tail node of A to become the new head node of B. First, if the number of elements in each block is appropriately sized (e.g., at most the size of one cache line), unrolled lists get the advantage of cache locality.

Comparing Linked Lists and Unrolled Linked Lists

To compare the overhead for an unrolled list, elements in doubly linked list implementations consist of data, a pointer to the next node, and a pointer to the previous node in the list, as shown below. Assuming we have 4-byte pointers, each node is going to take 8 bytes of pointer overhead. But the allocation overhead for the node could be anywhere between 8 and 16 bytes. So, if we want to store 1K items in this list, we are going to have 16KB of overhead. Thinking about our 1K items from above, an unrolled list would need only a small fraction of that overhead. Also, note that we can tune the array size to whatever gets us the best overhead for our application.
Binary search trees work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in sorted order, produce degenerate data structures that give very poor performance. If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, so randomly permuting the input is impractical.
Balanced tree algorithms rearrange the tree as operations are performed to maintain certain balance conditions and assure good performance. Skip lists are a probabilistic alternative to balanced trees. A skip list is a data structure that can be used as an alternative to balanced binary trees (refer to the Trees chapter). As compared to a binary tree, skip lists allow quick search, insertion and deletion of elements. This is achieved by using probabilistic balancing rather than strictly enforced balancing. It is basically a linked list with additional pointers such that intermediate nodes can be skipped. It uses a random number generator to make some decisions. In an ordinary sorted linked list, search, insert, and delete are in O(n) because the list must be scanned node-by-node from the head to find the relevant node. If somehow we could scan down the list in bigger steps (skip down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind skip lists. The find, insert, and remove operations on ordinary binary search trees are efficient, O(logn), when the input data is random; but less efficient, O(n), when the input data is ordered. Skip list performance for these same operations, and for any data set, is about as good as that of randomly built binary search trees, namely O(logn). The nodes in a skip list have many next references (also called forward references). We speak of a skip list node having levels, one level per forward reference. The number of levels in a node is called the size of the node. In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list. This results in O(n) performance per operation. Skip lists allow intermediate nodes in the list to be skipped during a traversal, resulting in an expected performance of O(logn) per operation.
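A compact sketch of the idea (coin-flip level selection with p = 1/2; the class layout, the −∞ header key, and MAX_LEVEL are our illustrative choices, not the book's code):

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * level          # one forward reference per level

class SkipList:
    MAX_LEVEL = 16

    def __init__(self):
        # header node with a key smaller than any real key
        self.head = SkipNode(float("-inf"), self.MAX_LEVEL)

    def random_level(self):
        """Flip coins: each extra level is kept with probability 1/2."""
        lvl = 1
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update, node = [None] * self.MAX_LEVEL, self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while node.next[i] and node.next[i].key < key:
                node = node.next[i]
            update[i] = node                # last node before key on level i
        new = SkipNode(key, self.random_level())
        for i in range(len(new.next)):      # splice in on each of its levels
            new.next[i] = update[i].next[i]
            update[i].next[i] = new

    def search(self, key):
        node = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while node.next[i] and node.next[i].key < key:
                node = node.next[i]         # skip ahead on high levels first
        node = node.next[0]
        return node is not None and node.key == key
```

Search drops down a level whenever the next node would overshoot, which is what yields the expected O(logn) behavior.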
Solution: Refer to the Stacks chapter. Solution: Brute-Force Method: Start with the first node and count the number of nodes present after that node. Continue this until the number of nodes after the current node is n - 1. Time Complexity: O(n²), for scanning the remaining list from the current node for each node. Space Complexity: O(1). Solution: Yes, using a hash table.
As an example consider the following list. That means, key is the position of the node in the list and value is the address of that node.
Position in List    Address of Node
1                   Address of 5 node
2                   Address of 1 node
3                   Address of 17 node
4                   Address of 4 node

By the time we traverse the complete list (for creating the hash table), we can find the list length. Let us say the list length is M.
Space Complexity: Since we need to create a hash table of size m, O(m). Solution: Yes. If we observe the Problem-3 solution, what we are actually doing is finding the size of the linked list. That means we are using the hash table to find the size of the linked list. We can find the length of the linked list just by starting at the head node and traversing the list. So, we can find the length of the list without creating the hash table. Hence, there is no need to create the hash table. Efficient Approach: Use two pointers, pNthNode and pTemp. Initially, both point to the head node of the list. pNthNode starts moving only after pTemp has made n moves.
From there, both move forward until pTemp reaches the end of the list. As a result, pNthNode points to the nth node from the end of the linked list. Note: At any point of time, both move one node at a time.
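The two-pointer technique just described can be sketched like this (list-builder helper is ours):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def from_list(values):
    """Build a singly linked list; return its head."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def nth_from_end(head, n):
    """Advance pTemp n nodes ahead, then move both pointers together."""
    p_temp = p_nth = head
    for _ in range(n):
        if p_temp is None:
            return None              # list has fewer than n nodes
        p_temp = p_temp.next
    while p_temp is not None:        # when pTemp falls off the end,
        p_temp = p_temp.next         # p_nth is n nodes from the end
        p_nth = p_nth.next
    return p_nth
```

This makes a single pass, so it is O(n) time and O(1) space.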
Solution: Brute-Force Approach. As an example, consider the following linked list which has a loop in it. The difference between this list and the regular list is that, in this list, there are two nodes whose next pointers are the same.
That means the repetition of next pointers indicates the existence of a loop. If there is a node with the same address then that indicates that some other node is pointing to the current node and we can say a loop exists.
Continue this process for all the nodes of the linked list. Does this method work? As per the algorithm, we are checking the next pointer addresses, but how do we find the end of the linked list (otherwise we will end up in an infinite loop)? Note: If we start with a node in a loop, this method may work depending on the size of the loop.
Using hash tables we can solve this problem. This is possible only if the given linked list has a loop in it. Time Complexity: O(n), for scanning the linked list. Note that we are doing a scan of only the input.
Space Complexity: O(n), for the hash table. Solution: No. Consider the following algorithm, which is based on sorting. Time Complexity: O(nlogn), for sorting the next pointers array. Space Complexity: O(n), for the next pointers array. Problem with the above algorithm: The above algorithm works only if we can find the length of the list. But if the list has a loop then we may end up in an infinite loop. Due to this reason the algorithm fails. The solution is named the Floyd cycle finding algorithm.
It uses two pointers moving at different speeds to walk the linked list. Once they enter the loop they are expected to meet, which denotes that there is a loop. This works because the only way a faster moving pointer would point to the same location as a slower moving pointer is if somehow the entire list or a part of it is circular.
Think of a tortoise and a hare running on a track. The faster running hare will catch up with the tortoise if they are running in a loop. As an example, consider the following example and trace out the Floyd algorithm. From the diagrams below we can see that after the final step they are meeting at some point in the loop which may not be the starting point of the loop.
Note: slowPtr (tortoise) moves one node at a time and fastPtr (hare) moves two nodes at a time. There are two possibilities for L: it either ends (snake), or its last element points back to one of the earlier elements in the list (snail). Give an algorithm that tests whether a given list L is a snake or a snail. Solution: It is the same as the cycle-detection problem above. If there is a cycle, find the start node of the loop. Solution: The solution is an extension of the cycle-detection solution. After finding the loop in the linked list, we initialize the slowPtr to the head of the linked list.
From that point onwards both slowPtr and fastPtr move only one node at a time. The point at which they meet is the start of the loop. Generally we use this method for removing the loops. Solution: This problem is at the heart of number theory. Furthermore, the tortoise is at the midpoint between the hare and the beginning of the sequence because of the way they move.
Solution: Yes, but the complexity might be high. Trace out an example. If there is a cycle, find the length of the loop. Solution: This solution is also an extension of the basic cycle detection problem. After finding the loop in the linked list, keep the slowPtr as it is. The fastPtr keeps on moving until it comes back to slowPtr. While moving fastPtr, use a counter variable which increments at the rate of 1. Solution: Traverse the list, find a position for the element, and insert it.
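The cycle questions above (detect a loop, find its start, measure its length) can be sketched together with Floyd's two-pointer technique (helper names are ours):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def detect_cycle(head):
    """Return the meeting node inside the loop, or None if there is no loop."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return slow
    return None

def loop_start(head):
    """Restart one pointer from head; moving both one step, they meet at the loop start."""
    meet = detect_cycle(head)
    if meet is None:
        return None
    slow = head
    while slow is not meet:
        slow, meet = slow.next, meet.next
    return slow

def loop_length(head):
    """Freeze one pointer at the meeting node and walk around the loop once."""
    meet = detect_cycle(head)
    if meet is None:
        return 0
    count, node = 1, meet.next
    while node is not meet:
        node, count = node.next, count + 1
    return count
```

All three run in O(n) time and O(1) space.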
Solution: The reverse of a single element is the element itself. The reverse of a pair is the second element followed by the first element. Space Complexity: O(n), for the recursive stack. The head or start pointers of both the lists are known, but the intersecting node is not known. Also, the number of nodes in each of the lists before they intersect is unknown and may be different in each list. Give an algorithm for finding the merging point. Solution: Brute-Force Approach: One easy solution is to compare every node pointer in the first list with every other node pointer in the second list, by which the matching node pointers will lead us to the intersecting node. But the time complexity in this case will be O(mn), which is high. Time Complexity: O(mn). Consider the following algorithm, which is based on sorting, and see why this algorithm fails. Any problem with the above algorithm? In the algorithm, we are storing all the node pointers of both the lists and sorting. But we are forgetting the fact that there can be many repeated elements. This is because after the merging point, all node pointers are the same for both the lists. The algorithm works fine only in one case, and that is when both lists have the ending node at their merge point. Space Complexity: O(n) or O(m). By combining sorting and search techniques we can reduce the complexity. Space Complexity: O(max(m, n)). Solution: Brute-Force Approach: For each node, count how many nodes are there in the list, and see whether it is the middle node of the list.
The reasoning is the same as that of the earlier problem. Time Complexity: O(n), the time for creating the hash table. Space Complexity: O(n), since we need to create a hash table of size n. Solution: Efficient Approach: Use two pointers. Move one pointer at twice the speed of the second. When the first pointer reaches the end of the list, the second pointer will be pointing to the middle node. Solution: Traverse recursively till the end of the linked list. While coming back, start printing the elements.
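The slow/fast pointer approach to finding the middle node can be sketched as (builder helper is ours):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def middle_node(head):
    """fast moves two nodes per step; when it reaches the end, slow is at the middle."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    return slow
```

For an even-length list this sketch returns the second of the two middle nodes; returning the first instead only requires stopping fast one step earlier.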
Solution: Use a 2x pointer. Take a pointer that moves at 2x [two nodes at a time]. At the end, if the length is even, then the pointer will be NULL; otherwise it will point to the last node. Solution: Assume the sizes of the lists are m and n. Solution: Refer to the Trees chapter.
Solution: Refer to the Sorting chapter. If the number of nodes in the list is odd, then make the first list one node longer than the second list. As an example, consider the following circular list.
Solution: Algorithm:
1. Get the middle of the linked list.
2. Reverse the second half of the linked list.
3. Compare the first half and the second half.
4. Construct the original linked list by reversing the second half again and attaching it back to the first half.

Else return. Otherwise, we can return the head. Create a linked list and at the same time keep it in a hash table. For n elements we have to keep all the elements in a hash table, which gives a preprocessing time of O(n). Hence, by using amortized analysis, we can say that element access can be performed within O(1) time. Time Complexity: O(1) [amortized]. Space Complexity: O(n), for the hash table.
Find which person will be the last one remaining with rank 1. Solution: Assume the input is a circular linked list with N nodes and each node has a number range 1 to N associated with it.
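A sketch of this elimination game on a circular linked list (the elimination count k is not stated in the surviving text, so here it is a parameter we assume given, with k ≥ 2):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def josephus(n, k):
    """People 1..n stand in a circle; every kth person is eliminated.
    Return the number of the survivor. Assumes n >= 1 and k >= 2."""
    # build the circular list 1..n
    head = prev = Node(1)
    for i in range(2, n + 1):
        node = Node(i)
        prev.next, prev = node, node
    prev.next = head                    # close the circle

    cur = head
    while cur.next is not cur:          # until one person remains
        for _ in range(k - 2):          # stop at the node before the victim
            cur = cur.next
        cur.next = cur.next.next        # remove the kth node
        cur = cur.next
    return cur.data
```

Each elimination is O(1) pointer work after O(k) steps of walking, so the whole game is O(nk).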
The head node has number 1 as data. Give an algorithm for cloning the list. Solution: We can use a hash table to associate newly created nodes with the instances of node in the given list. We scan the original list again and set the pointers building the new list. Delete that node from the linked list.
So what do we do? We can easily get away with moving the data from the next node into the current node and then deleting the next node. Time Complexity: O(1). Solution: To solve this problem, we can use the splitting logic. While traversing the list, split the linked list into two: one contains all even nodes and the other contains all odd nodes. Now, to get the final list, we can simply append the odd node linked list after the even node linked list.
To split the linked list, traverse the original linked list and move all odd nodes to a separate linked list of all odd nodes. At the end of the loop, the original list will have all the even nodes and the odd node list will have all the odd nodes.
To keep the ordering of all nodes the same, we must insert all the odd nodes at the end of the odd node list. Solution: For this problem the value of n is not known in advance. Solution: For this problem the value of n is not known in advance, and it is the same as finding the kth element from the end of the linked list. Assume the value of n is not known in advance. The other steps run in O(1). Therefore the total time complexity is O(min(n, m)). If we have an even number of elements, the median is the average of the two middle numbers in a sorted list of numbers.
We can solve this problem with both sorted and unsorted linked lists. First, let us try with an unsorted linked list. In an unsorted linked list, we can insert the element either at the head or at the tail. The disadvantage with this approach is that finding the median takes O(n). Also, the insertion operation takes O(1). Now, let us try with a sorted linked list. Insertion to a particular location is also O(1) in any linked list.
Note: For an efficient algorithm refer to the Priority Queues and Heaps chapter. The result should be stored in the third linked list. Also note that the head node contains the most significant digit of the number.
Solution: Since the integer addition starts from the least significant digit, we first need to visit the last node of both lists and add them up, create a new node to store the result, take care of the carry if any, and link the resulting node to the node which will be added to the second least significant node and continue. First of all, we need to take into account the difference in the number of digits in the two numbers. So before starting recursion, we need to do some calculation and move the longer list pointer to the appropriate place so that we need the last node of both lists at the same time.
The other thing we need to take care of is the carry. If two digits add up to 10 or more, we need to forward the carry to the next node and add it. If the most significant digit addition results in a carry, we need to create an extra node to store the carry.
The function below is actually a wrapper function which does all the housekeeping like calculating lengths of lists, calling recursive implementation, creating an extra node for the carry in the most significant digit, and adding any remaining nodes left in the longer list.
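The recursive addition just described might look like this (a sketch with our own helper names; digits are stored most-significant first, as the text specifies):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def from_digits(digits):
    head = None
    for d in reversed(digits):
        head = Node(d, head)
    return head

def to_digits(head):
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out

def length(head):
    n = 0
    while head:
        n, head = n + 1, head.next
    return n

def add_lists(l1, l2):
    """Add two numbers stored most-significant digit first."""
    n1, n2 = length(l1), length(l2)
    if n1 < n2:                          # make l1 the longer list
        l1, l2, n1, n2 = l2, l1, n2, n1

    def helper(a, b, diff):
        # diff = how many leading digits of `a` have no partner in `b`;
        # returns (result sublist, carry out of this position)
        if a is None:
            return None, 0
        if diff > 0:
            rest, carry = helper(a.next, b, diff - 1)
            total = a.data + carry
        else:
            rest, carry = helper(a.next, b.next, 0)
            total = a.data + b.data + carry
        return Node(total % 10, rest), total // 10

    result, carry = helper(l1, l2, n1 - n2)
    if carry:                            # extra node for a final carry
        result = Node(carry, result)
    return result
```

The length calculation aligns the two lists so that the recursion reaches both least significant digits at the same time, exactly as the wrapper description above outlines.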
Time Complexity: O(max(list1 length, list2 length)). Space Complexity: O(min(list1 length, list2 length)), for the recursive stack. Note: It can also be solved using stacks. Solution: Simple insertion sort is easily adaptable to singly linked lists. To insert an element, the linked list is traversed until the proper position is found, or until the end of the list is reached.
It is inserted into the list by merely adjusting the pointers without shifting any elements, unlike in the array. This reduces the time required for insertion but not the time required for searching for the proper position.
Solution: Find the middle of the linked list. We can do it with the slow and fast pointer approach. After finding the middle node, we reverse the second half; then we do an in-place merge of the two halves of the linked list. Solution: The solution is based on merge sort logic. Assume the given two linked lists are list1 and list2. Since the elements are in sorted order, we run a loop till we reach the end of either of the lists.
We compare the values of list1 and list2. If the values are equal, we add it to the common list. A stack is a simple data structure used for storing data similar to Linked Lists. In a stack, the order in which the data arrives is important. A pile of plates in a cafeteria is a good example of a stack.
The plates are added to the stack as they are cleaned, and they are placed on the top. When a plate is required, it is taken from the top of the stack.
The first plate placed on the stack is the last one to be used. Definition: A stack is an ordered list in which insertion and deletion are done at one end, called top. The last element inserted is the first one to be deleted. Special names are given to the two changes that can be made to a stack. When an element is inserted in a stack, the concept is called push, and when an element is removed from the stack, the concept is called pop.
Trying to pop out an empty stack is called underflow and trying to push an element in a full stack is called overflow. Generally, we treat them as exceptions. Let us assume a developer is working on a long-term project.
The manager then gives the developer a new task which is more important. The developer puts the long-term project aside and begins work on the new task. The phone rings, and this is the highest priority as it must be answered immediately. The developer pushes the present task into the pending tray and answers the phone.
When the call is complete the task that was abandoned to answer the phone is retrieved from the pending tray and work progresses. To take another call, it may have to be handled in the same manner, but eventually the new task will be finished, and the developer can draw the long-term project from the pending tray and continue with that.
For simplicity, assume the data is an integer type. Exceptions Attempting the execution of an operation may sometimes cause an error condition, called an exception.
In the Stack ADT, operations pop and top cannot be performed if the stack is empty. Attempting the execution of pop or top on an empty stack throws an exception.
Trying to push an element in a full stack throws an exception. In the array, we add elements from left to right and use a variable to keep track of the index of the top element. The array storing the stack elements may become full. A push operation will then throw a full stack exception. Similarly, if we try deleting an element from an empty stack it will throw an empty stack exception. We take one index variable top which points to the index of the most recently inserted element in the stack. To insert (or push) an element, we increment the top index and then place the new element at that index. Similarly, to delete (or pop) an element we take the element at the top index and then decrement the top index. We represent an empty stack with a top value equal to -1. The issue that still needs to be resolved is what we do when all the slots in the fixed-size array stack are occupied.
First try: What if we increment the size of the array by 1 every time the stack is full? This way of incrementing the array size is too expensive. Let us see the reason for this. Alternative Approach: Repeated Doubling. Let us improve the complexity by using the array doubling technique. If the array is full, create a new array of twice the size, and copy the items. With this approach, pushing n items takes time proportional to n (not n²). That means, we do the doubling at sizes 1, 2, 4, 8, and so on. If we observe carefully, we are doing the doubling operation logn times.
Now, let us generalize the discussion. For n push operations we double the array size logn times. That means, we will have logn terms in the expression below. The total time T(n) of a series of n push operations is proportional to n + n/2 + n/4 + ··· + 1 ≤ 2n. That means, T(n) is O(n), and the amortized time of a push operation is O(1).
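The repeated-doubling stack can be sketched as follows (class and method names are ours):

```python
class DynArrayStack:
    """Array-based stack that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.array = [None] * self.capacity
        self.top = -1                      # index of most recently pushed item

    def is_empty(self):
        return self.top == -1

    def push(self, item):
        if self.top + 1 == self.capacity:  # full: double and copy (amortized O(1))
            self.capacity *= 2
            new = [None] * self.capacity
            new[:self.top + 1] = self.array[:self.top + 1]
            self.array = new
        self.top += 1
        self.array[self.top] = item

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        item = self.array[self.top]
        self.top -= 1
        return item
```

The occasional O(n) copy is paid for by the n/2 cheap pushes that preceded it, which is the amortized argument made above.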
Performance Let n be the number of elements in the stack. Linked List Implementation The other way of implementing stacks is by using Linked lists. Push operation is implemented by inserting element at the beginning of the list. We start with an empty stack represented by an array of size 1. Note: For analysis, refer to the Implementation section. Solution: Stacks can be used to check whether the given expression has balanced symbols. This algorithm is very useful in compilers.
The parser reads one character at a time. The opening and closing delimiters are then compared. If they match, the parsing of the string continues. If they do not match, the parser indicates that there is an error on the line. A linear-time O(n) algorithm based on a stack can be given as: Algorithm: a) Create a stack. b) While the end of the input is not reached: push each opening symbol onto the stack; on a closing symbol, if the stack is empty report an error, otherwise pop the stack and check that the popped symbol matches the closing symbol. c) At the end of input, if the stack is not empty, report an error. Time Complexity: O(n), since we are scanning the input only once. Space Complexity: O(n) [for stack]. Solution: Before discussing the algorithm, first let us see the definitions of infix, prefix and postfix expressions.
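The balanced-symbols algorithm above can be sketched as:

```python
def is_balanced(expr):
    """Check (), [], {} balance with a stack; other characters are ignored."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)               # remember the opening symbol
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                       # leftovers mean unmatched openers
```

One pass over the input with at most one push or pop per character gives the O(n) bound.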
Infix: An infix expression is a single letter, or an operator, preceded by one infix string and followed by another infix string. Prefix: A prefix expression is a single letter, or an operator, followed by two prefix strings.
Every prefix string longer than a single variable contains an operator, first operand and second operand. Postfix: A postfix expression also called Reverse Polish Notation is a single letter or an operator, preceded by two postfix strings.
Every postfix string longer than a single variable contains first and second operands followed by an operator. Prefix and postfix notations are methods of writing mathematical expressions without parentheses. The time to evaluate a postfix or prefix expression is O(n), where n is the number of elements in the array. Now, let us focus on the algorithm. Therefore, for the infix to postfix conversion algorithm we have to define the operator precedence (or priority) inside the algorithm. The table shows the precedence and the associativity (order of evaluation) among operators.
Notice that between infix and postfix the order of the numbers or operands is unchanged. It is 2 3 4 in both cases. The stack that we use in the algorithm will be used to change the order of operators from infix to postfix. Postfix expressions do not contain parentheses. We shall not output the parentheses in the postfix output.
Solution: Algorithm:
1) Scan the postfix string from left to right.
2) If the scanned character is an operand, push it onto the stack.
3) If the scanned character is a binary operator, pop two elements from the stack. After popping the elements, apply the operator to those popped elements. Push the result of this operation (call it retVal) back onto the stack.

Example: Let us see how the above-mentioned algorithm works using an example.
Initially the stack is empty. Now, the first three characters scanned are 1, 2 and 3, which are operands. They will be pushed into the stack in that order. The second operand will be the first element that is popped. The value of the expression that has been evaluated (2 * 3) is pushed into the stack. Now, since all the characters are scanned, the remaining element in the stack (there will be only one element in the stack) will be returned.
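The postfix evaluation algorithm can be sketched as (taking the expression as a list of tokens):

```python
def evaluate_postfix(tokens):
    """Evaluate a postfix expression given as a list of tokens."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            second = stack.pop()     # second operand is popped first
            first = stack.pop()
            stack.append(ops[tok](first, second))
        else:
            stack.append(float(tok)) # operand: push its numeric value
    return stack.pop()
```

Each token is pushed and popped at most once, so evaluation is O(n).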
Solution: Using two stacks we can evaluate an infix expression in one pass without converting to postfix: a. Get the next token in the infix string. b. If the next token is an operand, place it on the operand stack. c. If the next token is an operator, evaluate any operators of greater or equal precedence from the operator stack before pushing it. Solution: Take an auxiliary stack that maintains the minimum of all values in the stack. Also, assume that each element of the stack is less than or equal to the elements below it. For simplicity, let us call the auxiliary stack the min stack.
When we pop the main stack, pop the min stack too. When we push the main stack, push either the new element or the current minimum, whichever is lower. At any point, if we want to get the minimum, then we just need to return the top element from the min stack.
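The first (lock-step) version of the min stack can be sketched as:

```python
class MinStack:
    """Stack supporting get_min in O(1) via an auxiliary min stack."""

    def __init__(self):
        self.main = []
        self.mins = []

    def push(self, item):
        self.main.append(item)
        # push the smaller of the new item and the current minimum
        self.mins.append(item if not self.mins else min(item, self.mins[-1]))

    def pop(self):
        self.mins.pop()              # pop the min stack in lock step
        return self.main.pop()

    def get_min(self):
        return self.mins[-1]
```

Because the two stacks grow and shrink together, the top of the min stack is always the minimum of the current contents.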
Let us take an example and trace it out. Initially let us assume that we have pushed 2, 6, 4, 1 and 5. Time Complexity: O(1). Space Complexity: O(n) [for the min stack]. The main problem of the previous approach is that for each push operation we are pushing an element onto the min stack as well (either the new element or the existing minimum element).
That means, we are pushing the duplicate minimum elements on to the stack. Now, let us change the algorithm to improve the space complexity. We still have the min stack, but we only pop from it when the value we pop from the main stack is equal to the one on the min stack.
We only push to the min stack when the value being pushed onto the main stack is less than or equal to the current min value. In this modified algorithm also, if we want to get the minimum, we just need to return the top element from the min stack. Solution: The number of stack permutations with n symbols is given by the nth Catalan number, and we will discuss this in the Dynamic Programming chapter. The string is marked with a special character X which represents the middle of the list (for example: ababa). Check whether the string is a palindrome.
Solution: This is one of the simplest algorithms. What we do is, start two indexes, one at the beginning of the string and the other at the end of the string. Each time compare whether the values at both the indexes are the same or not.
If the values are not the same, then we say that the given string is not a palindrome. If the values are the same, then increment the left index and decrement the right index. Continue this process until both the indexes meet at the middle (at X) or until the string is found not to be a palindrome. Solution: Refer to the Linked Lists chapter. If they are the same, then pop the stack and go to the next element in the input list.
Space Complexity: O(n), for the recursive stack. Analyze the running time of the queue operations. Solution: Refer to the Queues chapter.
Analyze the running time of the stack operations. Our stack routines should not throw an exception unless every slot in the array is used. Time Complexity of push and pop for both stacks is O(1). Space Complexity is O(1). Solution: For this problem, there could be other ways of solving it. Given below is one possibility, and it works as long as there is an empty space in the array. To implement 3 stacks we keep the following information. Now, let us define the push and pop operations for this implementation.
If so, try to shift the third stack upwards. If so, try to shift the third stack downward. Insert the new element at start2 - Top2. If so, try to shift the third stack downward and try pushing again. Since we may need to adjust the third stack. When either the left stack which grows to the right or the right stack which grows to the left bumps into the middle stack, we need to shift the entire middle stack to make room.
The same happens if a push on the middle stack causes it to bump into the right stack. To solve the above-mentioned problem number of shifts what we can do is: alternating pushes can be added at alternating sides of the middle list For example, even elements are pushed to the left, odd elements are pushed to the right.
This would keep the middle stack balanced in the center of the array but it would still need to be shifted when it bumps into the left or right stack, whether by growing on its own or by the growth of a neighboring stack. If we put it at the left, then the middle stack will eventually get pushed against it and leave a gap between the middle and right stacks, which grow toward each other.
There is no change in the time complexity, but the average number of shifts will get reduced. Solution: Let us assume that array indexes are from 1 to n. Similar to the discussion in Problem-15, to implement m stacks in one array, we divide the array into m parts as shown below.
The size of each part is n/m. From the above representation we can see that the first stack is starting at index 1 (starting index is stored in Base[1]), the second stack is starting at index n/m + 1 (starting index is stored in Base[2]), the third stack is starting at index 2n/m + 1 (starting index is stored in Base[3]), and so on.
Similar to Base array, let us assume that Top array stores the top indexes for each of the stack. Consider the following terminology for the discussion. Since we may need to adjust the stacks. The only case to check is stack empty case. Let the numbers 1,2,3,4,5,6 be pushed on to this stack in the order they appear from left to right.
Let S indicate a push and X indicate a pop operation. Can they be permuted into the given output orders? Solution: Let us assume that the initial stack size is 0. That means, for a given n value, we are creating the new arrays at sizes 1, 2, 4, 8, ..., n. The total number of copy operations is 1 + 2 + 4 + ··· + n/2 ≈ n. If we are performing n push operations, the amortized cost per operation is O(1).
Solution: Given a string of length 2n, we wish to check whether the given string of operations is permissible or not with respect to its functioning on a stack. The only restricted operation is pop whose prior requirement is that the stack should not be empty.
Hence the condition is: at any stage of processing the string, the number of push operations (S) must be greater than or equal to the number of pop operations (X). Now consider a different problem: two linked lists merge at some node, the number of nodes in each list before the merging point is unknown, and the two lists may have different lengths. Can we find the merging point using stacks? For the algorithm, refer to the Linked Lists chapter. Spans are used in financial analysis, e.g., for stock prices.
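The prefix condition for a permissible push/pop string can be sketched as follows (assuming 'S' marks a push and 'X' a pop; the function name is hypothetical):

```python
def is_permissible(ops):
    """Check whether a string of 'S' (push) and 'X' (pop) operations is valid:
    at every prefix, the number of pushes must be at least the number of pops."""
    height = 0                       # current stack size
    for op in ops:
        height += 1 if op == 'S' else -1
        if height < 0:               # a pop was attempted on an empty stack
            return False
    return True
```

This runs in O(n) time with O(1) extra space, since only a counter is maintained rather than an actual stack.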
The span of a stock price on a certain day, i, is the maximum number of consecutive days up to the current day the price of the stock has been less than or equal to its price on i. As an example, let us consider the table and the corresponding spans diagram. In the figure the arrows indicate the length of the spans. Now, let us concentrate on the algorithm for finding the spans. Solution: From the example above, we can see that span S[i] on day i can be easily calculated if we know the closest day preceding i, such that the price is greater on that day than the price on day i.
Let us call such a day P. Time Complexity: Each index of the array is pushed onto the stack exactly once and popped from the stack at most once, so the statements in the while loop are executed at most n times. Even though the algorithm has nested loops, the complexity is O(n), as the inner loop executes only n times during the course of the algorithm (trace an example and see how many times the inner loop succeeds). For simplicity, assume that the rectangles have equal widths but may have different heights.
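The stack-based span computation described above might look like this in Python (a sketch; the stack holds indices of candidate P days, i.e., days whose prices have not yet been exceeded, and the function name is hypothetical):

```python
def spans(prices):
    """Compute the stock span for each day using a stack of indices of days
    whose prices are strictly greater than all later prices seen so far."""
    result = [0] * len(prices)
    stack = []                        # indices of candidate P days
    for i, price in enumerate(prices):
        # discard days whose price is <= today's: they cannot be P for day i
        while stack and prices[stack[-1]] <= price:
            stack.pop()
        result[i] = i + 1 if not stack else i - stack[-1]
        stack.append(i)
    return result
```

Every index is pushed once and popped at most once, giving the O(n) bound argued above.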
For example, the figure on the left shows a histogram that consists of rectangles with the heights 3, 2, 5, 6, 1, 4, 4, measured in units where 1 is the width of the rectangles. Here our problem is: given an array with the heights of the rectangles (assuming each width is 1), we need to find the largest rectangle possible. For the given example, the largest rectangle is the shaded part. Solution: A straightforward answer is to go to each bar in the histogram and find the maximum possible area in the histogram for it.
Finally, find the maximum of these values. This will require O(n²) time. Solution: Linear search using a stack of incomplete subproblems: There are many ways of solving this problem.
Judge has given a nice algorithm for this problem which is based on stack. Process the elements in left-to-right order and maintain a stack of information about started but yet unfinished sub histograms. If the stack is empty, open a new sub problem by pushing the element onto the stack. Otherwise compare it to the element on top of the stack. If the new one is greater we again push it.
If the new one is equal we skip it. In all these cases, we continue with the next new element. If the new one is less, we finish the topmost sub problem by updating the maximum area with respect to the element at the top of the stack. Then, we discard the element at the top, and repeat the procedure keeping the current new element.
This way, all subproblems are finished when the stack becomes empty, or its top element is less than or equal to the new element, leading to the actions described above. If all elements have been processed and the stack is not yet empty, we finish the remaining subproblems by updating the maximum area with respect to the elements at the top. At first impression, this solution seems to have O(n²) complexity.
But if we look carefully, every element is pushed and popped at most once, and in every step of the function at least one element is pushed or popped.
Since the amount of work for the decisions and the update is constant, the complexity of the algorithm is O(n) by amortized analysis.
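The stack-of-unfinished-subhistograms algorithm described above might be sketched as follows (each stack entry records where a subhistogram started and its height; the function name is hypothetical):

```python
def max_rect_area(heights):
    """Largest rectangle in a histogram using a stack of started-but-unfinished
    subhistograms; each entry is (start_index, height)."""
    stack, best = [], 0
    for i, h in enumerate(heights):
        start = i
        while stack and stack[-1][1] > h:
            idx, height = stack.pop()            # finish the topmost subproblem
            best = max(best, height * (i - idx))
            start = idx                          # the new, lower bar extends back
        if not stack or stack[-1][1] < h:        # equal heights are skipped
            stack.append((start, h))
    for idx, height in stack:                    # finish remaining subproblems
        best = max(best, height * (len(heights) - idx))
    return best
```

Every element is pushed and popped at most once, matching the amortized O(n) analysis above.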
Solution: Try noting down the address of a local variable. Then call another function with a local variable declared in it, check the address of that local variable, and compare the two addresses. Now consider checking whether successive pairs of numbers in a stack are consecutive. The pairs can be increasing or decreasing, and if the stack has an odd number of elements, the element at the top is left out of a pair. For example, if the stack contains [4, 5, -2, -3, 11, 10, 5, 6, 20], then the output should be true because each of the pairs (4, 5), (-2, -3), (11, 10), and (5, 6) consists of consecutive numbers.
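The pairwise-consecutive check can be sketched as follows (a sketch only, assuming the stack is given bottom-to-top as a Python list; the function name is hypothetical):

```python
def pairwise_consecutive(stack):
    """Check whether successive pairs of stack elements are consecutive numbers
    (in either order); with an odd count, the top element is left unpaired."""
    items = list(stack)              # bottom .. top
    if len(items) % 2 == 1:
        items.pop()                  # the top element is left out of a pair
    for i in range(0, len(items), 2):
        if abs(items[i] - items[i + 1]) != 1:
            return False
    return True
```
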
Solution: Refer to the Queues chapter. The output string should not have any adjacent duplicates. Input: careermonk, Output: camonk; Input: mississippi, Output: m. Solution: This solution runs on the concept of an in-place stack.
When the current character matches the top of the stack, we skip characters until the character no longer matches the top, and remove the matched element from the stack. Space Complexity: O(1), as the stack simulation is done in place. Solution: One simple approach would involve scanning the array elements and, for each element, scanning the remaining elements to find the nearest greater element. Solution: A better approach is similar to an earlier problem. Create a stack and push the first element. For the rest of the elements, mark the current element as nextNearestGreater.
If stack is not empty, then pop an element from stack and compare it with nextNearestGreater. If nextNearestGreater is greater than the popped element, then nextNearestGreater is the next greater element for the popped element.
Keep popping from the stack while the popped element is smaller than nextNearestGreater. If nextNearestGreater is smaller than the popped element, then push the popped element back. Solution: We can use a LinkedList data structure with an extra pointer to the middle element. Also, we need another variable to store whether the LinkedList has an even or odd number of elements.
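The next-nearest-greater procedure described above can be sketched in Python; this version keeps indices on the stack instead of pushing popped elements back, which has the same effect (the function name is hypothetical):

```python
def next_greater(elements):
    """Stack-based next-greater-element computation; -1 marks elements with
    no greater element to their right."""
    result = [-1] * len(elements)
    stack = []                           # indices still awaiting a greater element
    for i, current in enumerate(elements):
        # current plays the role of nextNearestGreater for the popped elements
        while stack and elements[stack[-1]] < current:
            result[stack.pop()] = current
        stack.append(i)
    return result
```

Each index is pushed and popped at most once, so the running time is O(n).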
Update the pointer to the middle element according to this variable. A queue is a data structure used for storing data, similar to Linked Lists and Stacks. In a queue, the order in which data arrives is important. In general, a queue is a line of people or things waiting to be served in sequential order, starting at the beginning of the line. Definition: A queue is an ordered list in which insertions are done at one end (rear) and deletions are done at the other end (front).
The first element to be inserted is the first one to be deleted. Similar to Stacks, special names are given to the two changes that can be made to a queue. When an element is inserted in a queue, the concept is called EnQueue, and when an element is removed from the queue, the concept is called DeQueue.
DeQueueing an empty queue is called underflow and EnQueuing an element in a full queue is called overflow. The concept of a queue can be explained by observing a line at a reservation counter. When we enter the line we stand at the end of the line and the person who is at the front of the line is the one who will be served next. He will exit the queue and be served. As this happens, the next person will come at the head of the line, will exit the queue and will be served.
As each person at the head of the line keeps exiting the queue, we move towards the head of the line. Finally we will reach the head of the line and we will exit the queue and be served. This behavior is very useful in cases where there is a need to maintain the order of arrival.
Insertions and deletions in the queue must follow the FIFO scheme. For simplicity we assume the elements are integers. First, let us see whether we can use simple arrays for implementing queues as we have done for stacks. We know that, in queues, the insertions are performed at one end and deletions are performed at the other end.
After performing some insertions and deletions, the problem becomes easy to see: in the example shown below, the initial slots of the array are wasted.
So, the simple array implementation for a queue is not efficient. To solve this problem, we treat the array as circular. With this representation, if there are any free slots at the beginning, the rear pointer can easily wrap around to the next free slot.
Note: The simple circular array and dynamic circular array implementations are very similar to stack array implementations. Refer to Stacks chapter for analysis of these implementations.
In the array, we add elements circularly and use two variables to keep track of the start element and end element. Generally, front is used to indicate the start element and rear is used to indicate the end element in the queue. The array storing the queue elements may become full; an EnQueue operation will then throw a full-queue exception. Similarly, if we try deleting an element from an empty queue, it will throw an empty-queue exception.
Note: Initially, both front and rear point to -1, which indicates that the queue is empty. Trying to EnQueue a new element into a full queue causes an implementation-specific exception.
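The circular-array scheme above might look like this in Python (a fixed-capacity sketch with front and rear starting at -1 as in the note; the class name is hypothetical):

```python
class CircularQueue:
    """Fixed-capacity circular-array queue; front and rear start at -1
    to indicate an empty queue."""

    def __init__(self, capacity):
        self.array = [None] * capacity
        self.capacity = capacity
        self.front = self.rear = -1

    def is_empty(self):
        return self.front == -1

    def is_full(self):
        return (self.rear + 1) % self.capacity == self.front

    def enqueue(self, value):
        if self.is_full():
            raise OverflowError("full queue")
        if self.is_empty():
            self.front = 0
        self.rear = (self.rear + 1) % self.capacity   # wrap around circularly
        self.array[self.rear] = value

    def dequeue(self):
        if self.is_empty():
            raise IndexError("empty queue")
        value = self.array[self.front]
        if self.front == self.rear:                   # queue becomes empty
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % self.capacity
        return value
```

Both operations are O(1); the modulo arithmetic lets rear reuse the free slots at the beginning of the array.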
EnQueue operation is implemented by inserting an element at the end of the list. DeQueue operation is implemented by deleting an element from the beginning of the list.
To access the queue, we are only allowed to use the methods of the queue ADT. Solution: Let S1 and S2 be the two stacks to be used in the implementation of the queue.
All we have to do is define the EnQueue and DeQueue operations for the queue. Time Complexity: From the algorithm, if the stack S2 is not empty, then the complexity is O(1). If the stack S2 is empty, then we need to transfer the elements from S1 to S2. But if we observe carefully, the number of transferred elements and the number of popped elements from S2 are equal.
Due to this, the average complexity of the pop operation in this case is O(1), and the amortized complexity of the pop operation is O(1). Now consider implementing a stack using two queues: one of the queues will be used to store the elements and the other to hold them temporarily during the pop and top methods. The push method would enqueue the given element onto the storage queue. The top method would transfer all but the last element from the storage queue onto the temporary queue, save the front element of the storage queue to be returned, transfer the last element to the temporary queue, then transfer all elements back to the storage queue.
The pop method would do the same as top, except instead of transferring the last element onto the temporary queue after saving it for return, that last element would be discarded. Let Q1 and Q2 be the two queues to be used in the implementation of stack. All we have to do is to define the push and pop operations for the stack. In the algorithms below, we make sure that one queue is always empty.
Push Operation Algorithm: Insert the element into whichever queue is not empty; if Q1 is empty, then EnQueue the element into Q2.
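The two-queue stack described above can be sketched as follows (a sketch using Python's deque for the queues; the class name is hypothetical):

```python
from collections import deque

class QueueStack:
    """Stack built from two queues: one holds the elements, the other is
    temporary storage, and one of them is always empty."""

    def __init__(self):
        self.q1, self.q2 = deque(), deque()

    def push(self, value):
        # insert into whichever queue is not empty (Q2 if Q1 is empty)
        target = self.q1 if self.q1 else self.q2
        target.append(value)

    def pop(self):
        full = self.q1 if self.q1 else self.q2
        empty = self.q2 if self.q1 else self.q1
        while len(full) > 1:              # move all but the last element
            empty.append(full.popleft())
        return full.popleft()             # the last element is the stack top
```

Push is O(1), while pop is O(n) because it transfers n - 1 elements between the queues.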