https://www.google.com/intl/en_us/about/careers/lifeatgoogle/hiringprocess/
Core of how we hire / Noogler
Things move quickly at Internet speed at Google. Hire people who are good for Google, for the long term: smart, team-oriented, get things done; good for the role; nimble both in how we work and in how we hire. (nimble -- quick and light in movement or action; agile)
Great at lots of things; love big challenges and welcome big changes. Not too many specialists in just one particular area.
Interview process
The path to getting hired usually involves a first conversation with a recruiter, a phone interview, and onsite interviews at one of our offices. A few things make getting hired at Google a little different.
How we interview
You'll likely interview with 4 or 5 Googlers. They're looking for 4 things:
- Leadership: how you flexed different muscles in different situations in order to mobilize a team, even when not officially appointed as the leader -- by asserting a leadership role at work or with an organization, or by helping a team succeed. (mobilize -- prepare and organize for action; assert -- state a fact or belief confidently and forcefully)
- Role-Related Knowledge: have a variety of strengths and passions, not just isolated skill sets; have the experience and the background that will set you up for success in your role. Engineering candidates: coding skills and technical areas of expertise.
- How You Think: they are more interested in how you think -- role-related answers that provide insight into how you solve problems. Show how you tackle the problem presented; don't get hung up on nailing the "right" answer. They are less concerned about grades and transcripts.
- Googleyness: show a feel for what makes you you. They also want to make sure this is a place you'll thrive, so they'll look for signs of your comfort with ambiguity, your bias to action, and your collaborative nature.
How we decide
Hiring the right candidate for the right role, and for Google.
We collect feedback from multiple Googlers. At Google you work on tons of projects with different groups of Googlers, across many teams and time zones, so some of your interviewers could be potential teammates while others will be from other teams. This helps Google see how you might collaborate and fit in overall. Independent committees of Googlers help ensure we're hiring for the long term: an independent committee of Googlers reviews feedback from all of the interviewers. This committee is responsible for ensuring our hiring process is fair and that we're holding true to our "good for Google" standards as we grow. We believe that if you hire great people and involve them intensively in the hiring process, you'll get more great people. Over the past couple of years, we've spent a lot of time making our hiring process as efficient as possible, reducing time-to-hire and increasing our communication with candidates. While involving Googlers in our process does take longer, we believe it's worth it. Our early Googlers identified these principles more than ten years ago, and it's what allows us to hold true to who we are as we grow. These core principles are true across Google, but when it comes to specifics, there are some pieces of our process that look a little different across teams. Our recruiters can help you navigate these as the time comes. At Google, we don't just accept difference - we celebrate it, we support it, and we thrive on it for the benefit of our employees, our products and our community.
Google is proud to be an equal opportunity workplace and is an affirmative action employer.
(equal opportunity -- the aim is that important jobs go to those "most qualified", the persons most likely to perform ably in a given task, and not to persons for arbitrary or irrelevant reasons, such as circumstances of birth, upbringing, having well-connected relatives or friends, religion, sex, ethnicity, race, caste, or involuntary personal attributes such as disability, age, gender identity, or sexual orientation)
(affirmative action -- seeks to achieve goals such as bridging inequalities in employment and pay, increasing access to education, promoting diversity, and redressing apparent past wrongs, harms, or hindrances)
https://www.google.com/intl/en_us/about/careers/lifeatgoogle/google-sales-resume-interview-prep.html
Hangout on Air: Google sales resume & interview prep
Authenticity. Creativity. Metrics. Watch this Hangout on Air to learn what our Sales Staffing team looks for in Sales candidates, how to write your resume, and the best ways to prepare for your interview.
What they look for: go-getters, self-starters, creative people; for sales, passion for our products.
https://www.google.com/intl/en_us/about/careers/lifeatgoogle/hangout-on-air-tech-interviewing.html
Hangout on Air: Candidate coaching session - Tech interviewing
- Code + basic data structures/algorithms: how to use and apply different algorithms.
- Know what a linked list, a heap, and a hash table are. You don't have to implement them from scratch, but know how they work, why you'd use them, and why you'd use one over another. You don't have to prove the big-O, but still know it and be able to say how they behave.
- Choosing a data structure is a MUST: the right data structure is best, but a suboptimal one is ok.
- Practice. If you have 10+ years of experience you may be used to custom data structures, so practice the basic ones.
- Write on a whiteboard or notebook. No pseudocode -- write real code: C++/Java/C, JavaScript (for UI), Python.
- Most important (the interviewer cannot read your mind): how you approach the problem and how you work. The work and the final answer count equally.
- Tell the interviewer what you are thinking; brainstorm out loud. Did you understand the problem? How do you think through it? Working through the problem aloud gives the interviewer a clear idea of what step in the process you are at, so they can guide you -- or confirm you are already on the right track. What you are thinking is super important.
- Questions are in-depth; they won't have an easy solution. They might have a simple solution -- most have a simple inefficient solution plus a better one -- or they won't have a simple one at all. Then the question is expanded to cover other situations.
- How you work and how you approach the problem -- e.g., given the first solution, how you expand it to other cases -- is SUPER IMPORTANT.
- Take the problem, understand it, break it down, solve it.
- For the problem: What is given? (given 1, given 2, given 3.) What are your assumptions? (assumption 1, assumption 2, assumption 3.)
- Record changes in assumptions -- this is how a question gets complicated. If assumption 1 changes to assumption 1.1, what else changes? If assumption 2 changes to assumption 2.1, what else changes? And so on.
- Give a solution, then change it to make it better where possible (the better solution is more complicated). These steps will fill most of the interview time.
- Don't worry if you don't reach the end of the question. What matters: 1. how well you handled the different questions, 2.
what you know.
Sample question: peanutbutter --> "peanut butter"
- Questions are under-specified for a reason. What questions will you ask to answer it? So don't write the code right away -- ask questions. Once you have figured out some of the constraints:
- Clarify the problem. fastman -- "fast man"? Disambiguate the expected result.
- TEST CASES:
  a) What if we have multiple words? (Interviewer's answer may be: it might not do that.)
  a2) What if there is only one word?
  b) What if we have an empty string (no words)? (Interviewer's answer may be: it won't do anything.)
  c) What if we have a NULL string?
  d) What if it is all numbers?
  *) If the interviewer gave examples, run through them -- there is a reason why they gave them.
  -) Edge cases (VERY IMPORTANT) -- e.g., make sure recursion terminates. This shows that you understand how to solve the problem.
- Understand the problem, and show you understand it. Make sure the interviewer and you are thinking about the same problem/solution -- avoid the communication problem. State and clarify key assumptions: What is the expected result? Any memory/speed requirements? Where do the words come from? (Interviewer's answer may be: use a dictionary, or something else.) Clarify the function signature.
- Start with the first solution that comes to mind -- maybe a straightforward, non-optimal solution / brute force / n-squared solution. Now there is a code sample and an interesting conversation.
- Run through the initial 2-3 examples to check it (VERY IMPORTANT) -- pretend you are a debugger (any < instead of >?, etc.). Check the TEST CASES above.
- Use reasonable variable names. Clean up the code.
- Ask the interviewer if they have any questions -- maybe they give an edge case. What's the run time of this? The memory usage?
- Refine the solution: a faster solution, O(N log N) to N, or log N.
- More detailed questions can come in: "peanut" could be "pea nut" too.
- "I know there is a way, I just don't remember what it is" (maybe .length OR .size?, etc.).
- Don't worry if you don't know the name of an API or how an API behaves -- ask.
- What if the dictionary can't fit in memory? What is the idea of how to approach the problem (ok if you can't code it) at a scale that's pretty big -- Google scale?
- When there are multiple solutions, which is the most likely one? peanutbutter -- "pea nut butter" OR "peanut butter"?
- What if words are spelled incorrectly? (Super advanced problem -- some kind of spell-check algorithm.)
- A good interview is really just a conversation between two engineers: you have a problem, you kind of work through the problem, you talk about different solutions. "Here is the problem; I understand how to solve it; this is kind of what I want to do -- does this sound good?"
- Optimizing the solution: see if you can come up with a reasonably more efficient one than the naive solution. Optimize time; optimize space. Pre-compute things -- is that a reasonable idea? What if the input is significantly bigger and doesn't fit in memory? Is a max length given for a dictionary word? For the word to break?
- Forgot an API name or behaviour? Just ask.
- Testing interview: given tests, which tests can fail, and under which circumstances? Off-by-1 errors.
- Know your data structures (e.g., binary tree) -- for college grads and industry experience alike.
- Analytical skills: how you approach something never seen before; how you adapt.
- Sound design.
- While writing code: understand limit/corner cases; do error checking (null or zero); make notes.
- Also write clean, production-grade code.
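The peanutbutter question above is the classic word-break problem. Here is a minimal sketch of the memoized recursive approach in Python -- the dictionary contents are illustrative, and it assumes the dictionary fits in memory (one of the clarifying questions worth asking first):

```python
def word_break(s, dictionary):
    """Split s into dictionary words; return one valid split, or None if impossible.

    Memoized recursion over start indices; assumes the dictionary
    fits in memory.
    """
    if s is None:          # NULL-string edge case from the test list above
        return None
    words = set(dictionary)
    memo = {}

    def split(i):
        if i == len(s):    # recursion terminates at the end of the string
            return []
        if i in memo:
            return memo[i]
        result = None
        for j in range(i + 1, len(s) + 1):
            if s[i:j] in words:
                rest = split(j)
                if rest is not None:
                    result = [s[i:j]] + rest
                    break
        memo[i] = result
        return result

    return split(0)

print(word_break("peanutbutter", ["pea", "nut", "peanut", "butter"]))
# ['pea', 'nut', 'butter'] -- it found "pea nut", not "peanut":
# the ambiguity mentioned in the notes is real, so ask which split is wanted
```

An empty string returns `[]`, a None input returns None, and an unsplittable string returns None -- exactly the edge cases (a) through (d) worth walking the interviewer through.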
https://www.google.com/intl/en_us/about/careers/lifeatgoogle/googles-recruiter-on-how-to-build-a-superstar-team.html https://www.entrepreneur.com/article/217877
Google's Recruiter on How to Build a Superstar Team
Todd Carlisle, staffing director for our business teams, knows to look for people who can think quickly on their feet. He talks about the best ways to build a fantastic team by finding candidates with the "startup" mentality - raw intellect, learning agility, diversity, leadership and innovation. Carlisle looks for raw intellect, learning agility, diversity, leadership and innovation in résumés -- but to find that startup mentality, he says, you have to ask the right questions during the interview. Finally, you have to want to take them out to lunch after the interview. It's important to hire leaders who play well with others, so ask about their experiences working on a team. Bragging that they convinced everyone else that they were right, or taking credit for everything, are big red flags. Carlisle also asks candidates to talk about a time they really screwed up -- and that whole spin-your-weaknesses-into-strengths strategy won't cut it. "You have to be humble and talk about what you learned, because you don't win all the time," he says. "That's not how real life works." "What's motivating you right now and how can I give you more of that?" "People's needs -- money, promotions, cool projects -- will change," he says, "but if you give them what they want, your staff won't." (won't quit, that is)
Web/Internet Technologies: the communication protocols, languages/APIs, and other mechanisms that enable the internet to function.
- 9:00 HTTP
- 9:30 Browsers
- 10:00 DNS; IPv4 data packet: https://commons.wikimedia.org/wiki/File:IPv4_header_(1).png
- 10:30 HTML/XML; IP address: https://simple.wikipedia.org/wiki/IP_address
- 11:00 AJAX, etc.; hostname: https://en.wikipedia.org/wiki/Hostname; MAC
Brush up on HTTP protocol basics:
- 11:30 Part I (READ AGAIN): https://code.tutsplus.com/tutorials/http-the-protocol-every-web-developer-must-know-part-1--net-31177
- 12:00 Part II: https://code.tutsplus.com/tutorials/http-the-protocol-every-web-developer-must-know-part-2--net-31155
- continue from here: https://www.w3schools.com/html/html_css.asp
12:30 Databases/SQL/NoSQL:
- 13:00 Data modeling fundamentals. ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc.
- 13:30 Database architecture/efficiency. Atomicity: atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, then the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors and crashes.
- 14:00 SQL commands/syntax. Consistency: the consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.
- 14:30 Complex query design, etc. Isolation: the isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if the transactions were executed sequentially, i.e., one after the other.
Providing isolation is the main goal of concurrency control. Durability: the durability property ensures that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in non-volatile memory.
CAP theorem (for a distributed data store -- a computer network where information is stored on more than one node, often in a replicated fashion):
- Consistency: every read receives the most recent write or an error.
- Availability: every request receives a (non-error) response, without guarantee that it contains the most recent write.
- Partition tolerance: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes.
Replication: data replication is when the same data is stored on multiple storage devices. Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility.
Linux/Unix:
- 15:00 Must be comfortable working in a Linux environment (as a user, not an admin) and will be expected to have a good working knowledge of:
- 15:30 user-level Linux commands
- 16:00 shell scripting
- 16:30 regular expressions, etc.
Troubleshooting (17:00-21:00): interviewers are looking for a logical and structured approach to problem solving through distributed systems, network, and web scenarios. Make sure you understand the questions and ask appropriate follow-up questions to the interviewer if you need clarification. A big part is finding out what the actual problem is and breaking it down into specifics.
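A quick way to see atomicity and consistency in action is with Python's built-in sqlite3 module. This is an illustrative sketch (the accounts table and balances are made up): a transaction that violates a constraint is rolled back in its entirety, so an earlier credit inside the same transaction also disappears.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint is a consistency rule: balances may never go negative.
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY, "
    "balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the with-block is one transaction: commit on success, rollback on error
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
        # This debit violates the CHECK constraint (100 - 150 < 0) ...
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass  # ... so the WHOLE transaction is rolled back, including bob's credit

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} -- all or nothing
```

Had the failure happened after a commit, durability would require the committed state to survive; here the failure happens mid-transaction, so atomicity erases both halves of the transfer.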
21:30 Check out Life in App Engine Production for a troubleshooting example.
Algorithms/Data Structures (22:00-1:00): dig out your CS textbook and brush up on your basic algorithm theory, including sorting and Big-O notation. Data structure topics may include linked lists, binary trees, hash tables, etc.
Programming/OO (1:30-3:30): you will be asked to write some code in at least one of the interviews (in your preferred language). Syntax is not as important as structured thinking, but proper syntax never hurts. Object-oriented theory and concepts may also be covered.
--------------------------- detailed
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, and hypermedia information systems.[1] HTTP is the foundation of data communication for the World Wide Web. HTTP is the protocol to exchange or transfer hypertext. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text.
Year   HTTP version
1991   0.9
1996   1.0
1997   1.1
2015   2
An HTTP session is a sequence of network request-response transactions. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a server (typically port 80, occasionally port 8080; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for a client's request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own. The body of this message is typically the requested resource, although an error message or other information may also be returned.[1]
HTTP authentication: HTTP provides multiple authentication schemes, such as basic access authentication and digest access authentication, which operate via a challenge-response mechanism whereby the server identifies and issues a challenge before serving the requested content.
HTTP provides a general framework for access control and authentication, via an extensible set of challenge-response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information.[12]
Request methods: HTTP defines methods (sometimes referred to as verbs) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification[13] defined the GET, POST and HEAD methods, and the HTTP/1.1 specification[14] added 5 new methods: OPTIONS, PUT, DELETE, TRACE and CONNECT. By being specified in these documents, their semantics are well known and can be depended on. Any client can use any method, and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. (From a RESTful service standpoint, an operation/service is idempotent if clients can make the same call repeatedly while producing the same result.) There is no limit to the number of methods that can be defined, and this allows future methods to be specified without breaking existing infrastructure. For example, WebDAV defined 7 new methods and RFC 5789 specified the PATCH method. Main article: List of HTTP header fields
GET: The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations." See safe methods below.
HEAD: The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
POST: The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.[16]
PUT: The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.[17] For example:
  PUT user/123 -> creates user 123
  PUT user/123 -> modifies user 123
  PUT user/234 -> creates user 234
DELETE: The DELETE method deletes the specified resource.
TRACE: The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.
OPTIONS: The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.
CONNECT: The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.[19][20] See HTTP CONNECT tunneling.
PATCH: The PATCH method applies partial modifications to a resource.[21]
NoSQL: https://en.wikipedia.org/wiki/NoSQL
Graph DB:
https://upload.wikimedia.org/wikipedia/commons/3/3a/GraphDatabase_PropertyGraph.png
https://academy.datastax.com/resources/getting-started-graph-databases
http://tinkerpop.apache.org/
http://tinkerpop.apache.org/docs/current/reference/
https://tinkerpop.apache.org/gremlin.html
Apache TinkerPop™ is a graph computing framework for both graph databases (OLTP) and graph analytic systems (OLAP). It is an open source, vendor-agnostic graph computing framework distributed under the commercially friendly Apache2 license. When a data system is TinkerPop-enabled, its users are able to model their domain as a graph and analyze that graph using the Gremlin graph traversal language.

// What are the names of Gremlin's friends' friends?
g.V().has("name","gremlin").
  out("knows").out("knows").values("name")

// What are the names of projects that were created by two friends?
g.V().match(
  as("a").out("knows").as("b"),
  as("a").out("created").as("c"),
  as("b").out("created").as("c"),
  as("c").in("created").count().is(2)).
select("c").by("name")

// What are the names of the managers in
// the management chain going from Gremlin to the CEO?
g.V().has("name","gremlin").
  repeat(in("manages")).until(has("title","ceo")).
  path().by("name")

// What is the distribution of job titles amongst Gremlin's collaborators?
g.V().has("name","gremlin").as("a").
  out("created").in("created").
  where(neq("a")).
  groupCount().by("title")

// Get a ranking of the most relevant products for Gremlin given his purchase history.
g.V().has("name","gremlin").out("bought").aggregate("stash").
  in("bought").out("bought").
  where(not(within("stash"))).
  groupCount().
  order(local).by(values,decr)

Gremlin is the graph traversal language of Apache TinkerPop.
Gremlin is a functional, data-flow language that enables users to succinctly express complex traversals on (or queries of) their application's property graph. Every Gremlin traversal is composed of a sequence of (potentially nested) steps. A step performs an atomic operation on the data stream. Every step is either a map-step (transforming the objects in the stream), a filter-step (removing objects from the stream), or a sideEffect-step (computing statistics about the stream). The Gremlin step library extends these 3 fundamental operations to provide users a rich collection of steps that they can compose in order to ask any conceivable question they may have of their data, for Gremlin is Turing complete.

What are the names of Gremlin's friends' friends?
g.V().has("name","gremlin").
  out("knows").
  out("knows").
  values("name")
Get the vertex with name "gremlin."
Traverse to the people that Gremlin knows.
Traverse to the people those people know.
Get those people's names.

What are the names of the projects created by two friends?
g.V().match(
  as("a").out("knows").as("b"),
  as("a").out("created").as("c"),
  as("b").out("created").as("c"),
  as("c").in("created").count().is(2)).
select("c").by("name")
...there exists some "a" who knows "b".
...there exists some "a" who created "c".
...there exists some "b" who created "c".
...there exists some "c" created by 2 people.
Get the name of all matching "c" projects.

Get the managers from Gremlin to the CEO in the hierarchy.
g.V().has("name","gremlin").
  repeat(in("manages")).
  until(has("title","ceo")).
  path().by("name")
Get the vertex with the name "gremlin."
Traverse up the management chain...
...until a person with the title of CEO is reached.
Get the names of the managers in the path traversed.

Get the distribution of titles amongst Gremlin's collaborators.
g.V().has("name","gremlin").as("a").
  out("created").in("created").
  where(neq("a")).
  groupCount().by("title")
Get the vertex with the name "gremlin" and label it "a."
Get Gremlin's created projects and then who created them...
...that are not Gremlin.
Group count those collaborators by their titles.

Get a ranked list of relevant products for Gremlin to purchase.
g.V().has("name","gremlin").
  out("bought").aggregate("stash").
  in("bought").out("bought").
  where(not(within("stash"))).
  groupCount().order(local).by(values,decr)
Get the vertex with the name "gremlin."
Get the products Gremlin has purchased and save them as "stash."
Who else bought those products, and what else did they buy...
...that Gremlin has not already purchased.
Group count the products and order by their relevance.

Get the 10 most central people in the knows-graph.
g.V().hasLabel("person").
  pageRank().
  by("friendRank").
  by(outE("knows")).
  order().by("friendRank",decr).
  limit(10)
Get all people vertices.
Calculate their PageRank using knows-edges.
Order the people by their friendRank score.
Get the top 10 ranked people.

https://github.com/thinkaurelius/titan/
Bandwidth (the maximum possible) vs. throughput (the actual: the line narrows and is split between various devices, and between uploads and downloads).
Requests per second: https://en.wikipedia.org/wiki/Web_server#requests_per_second
https://stackoverflow.com/questions/16952625/how-can-a-web-server-handle-multiple-users-incoming-requests-at-a-time-on-a-sin
Load balancing:
https://stackoverflow.com/a/38261380/984471
https://stackoverflow.com/a/48878168/984471
https://stackoverflow.com/a/48878548/984471
https://stackoverflow.com/a/48878955/984471
Asynchronous input/output: it is possible to start the communication and then perform processing that does not require that the I/O be completed. Any task that depends on the I/O having completed still needs to wait for the I/O operation to complete, and thus is still blocked, but other processing that does not have a dependency on the I/O operation can continue. Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data.
An I/O device can incorporate mechanical devices that must physically move, such as a hard drive seeking a track to read or write; this is often orders of magnitude slower than the switching of electric current. For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles.
Event-driven architecture (EDA) is a software architecture pattern promoting the production, detection, consumption of, and reaction to events.
https://en.wikipedia.org/wiki/Scalability
Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.[1] A system whose performance improves after adding hardware, proportionally to the capacity added, is said to be a scalable system. For example, a system is considered scalable if it is capable of increasing its total output under an increased load when resources (typically hardware) are added. An algorithm, design, networking protocol, program, or other system is said to scale if it is suitably efficient and practical when applied to large situations (e.g. a large input data set, a large number of outputs or users, or a large number of participating nodes in the case of a distributed system). If the design or system fails when a quantity increases, it does not scale. In practice, if there are a large number of things (n) that affect scaling, then resource requirements (for example, algorithmic time-complexity) must grow less than n-squared as n increases. An example is a search engine, which scales not only with the number of users, but also with the number of objects it indexes.
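Going back to the asynchronous I/O notes above -- the point that independent work can proceed while waiting on slow I/O -- here is a minimal sketch with Python's asyncio. asyncio.sleep stands in for a slow device; the names and delays are illustrative:

```python
import asyncio
import time

async def read_device(name, delay):
    # Stand-in for a slow I/O operation (disk seek, network round trip).
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # Start all three "I/O operations"; their waits overlap instead of adding up.
    results = await asyncio.gather(
        read_device("disk", 0.1),
        read_device("net", 0.1),
        read_device("db", 0.1),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)   # ['disk', 'net', 'db']
print(elapsed)   # roughly 0.1s, not 0.3s: the waits were overlapped
```

This is also the core idea behind event-driven servers: one thread waits on many in-flight operations at once instead of blocking on each in turn.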
Scalability refers to the ability of a site to increase in size as demand warrants.[3] Scalability can be measured in various dimensions, such as:
- Administrative scalability: the ability for an increasing number of organizations or users to easily share a single distributed system.
- Functional scalability: the ability to enhance the system by adding new functionality at minimal effort.
- Geographic scalability: the ability to maintain performance, usefulness, or usability regardless of expansion from concentration in a local area to a more distributed geographic pattern.
- Load scalability: the ability for a distributed system to easily expand and contract its resource pool to accommodate heavier or lighter loads or numbers of inputs. Alternatively, the ease with which a system or component can be modified, added, or removed to accommodate changing load.
- Generation scalability: the ability of a system to scale up by using new generations of components. Thereby, heterogeneous scalability is the ability to use components from different vendors.[4]
A routing protocol is considered scalable with respect to network size if the size of the necessary routing table on each node grows as O(log N), where N is the number of nodes in the network. A scalable online transaction processing system or database management system is one that can be upgraded to process more transactions by adding new processors, devices and storage, and which can be upgraded easily and transparently without shutting it down. Some early peer-to-peer (P2P) implementations of Gnutella had scaling issues: each node query flooded its requests to all peers, so the demand on each peer would increase in proportion to the total number of peers, quickly overrunning the peers' limited capacity. Other P2P systems like BitTorrent scale well because the demand on each peer is independent of the total number of peers.
There is no centralized bottleneck, so the system may expand indefinitely without the addition of supporting resources (other than the peers themselves). The distributed nature of the Domain Name System allows it to work efficiently even when all hosts on the worldwide Internet are served, so it is said to "scale well".
To scale horizontally (or scale out/in) means to add more nodes to (or remove nodes from) a system, such as adding a new computer to a distributed software application. To scale vertically (or scale up/down) means to add resources to (or remove resources from) a single node in a system, typically involving the addition of CPUs or memory to a single computer.
https://en.wikipedia.org/wiki/Chroot
A chroot on Unix operating systems is an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The term "chroot" may refer to the chroot(2) system call or the chroot(8) wrapper program. The modified environment is called a chroot jail.
https://en.wikipedia.org/wiki/Operating-system-level_virtualization
In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Docker, Solaris Containers, OpenVZ, Linux-VServer, LXC, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.
Operating-system-level virtualization, also known as containerization, refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers,[1] partitions, virtualization engines (VEs) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. However, programs running inside a container can only see the container's contents and the devices assigned to the container.

On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers.

On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all of the system's resources, including:

- hardware capabilities that can be employed, such as the CPU and the network connection;
- data that can be read or written, such as files, folders and network shares;
- connected peripherals it can interact with, such as a webcam, printer, scanner, or fax.

With operating-system-level virtualization, or containerization, it is possible to run programs within containers to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can see only the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, and a subset of the computer's resources is allocated to each of them.
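The idea that a program in a container only sees what was allocated to it can be reduced to a toy object model (all names and resources here are illustrative, not a real container API):

```javascript
// Resources visible to an ordinary program running directly on the host.
const host = {
  cpus: 8,
  files: ["/etc/passwd", "/home/alice/notes.txt", "/var/log/syslog"],
  devices: ["webcam", "printer", "scanner"],
};

// A "container" is handed only a subset of the host's resources; a program
// inside it believes this subset is the whole machine.
function makeContainer(host, { cpus, filePrefix, devices }) {
  return {
    cpus,
    files: host.files.filter((f) => f.startsWith(filePrefix)),
    devices: host.devices.filter((d) => devices.includes(d)),
  };
}

const c = makeContainer(host, { cpus: 2, filePrefix: "/home/alice", devices: [] });
console.log(c); // 2 CPUs, one file, no devices: the rest of the host is invisible
```

Real kernels enforce this with namespaces (visibility) and resource controls (limits), rather than with filtered lists, but the observable effect for the contained program is the same.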
Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with each other. Containerization has similarities to application virtualization: in application virtualization, only one computer program is placed in an isolated container, and the isolation applies to the file system only.

Live migration
https://en.wikipedia.org/wiki/Live_migration

Docker is a tool that can package an application and its dependencies in a virtual container that can run on any Linux server. This helps enable flexibility and portability in where the application can run, whether on premises, public cloud, private cloud, bare metal, etc.[14] Docker is a software technology providing operating-system-level virtualization, also known as containers, promoted by the company Docker, Inc.[6] Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux.[7] Docker uses the resource-isolation features of the Linux kernel, such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others,[8] to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs).[9] The Linux kernel's support for namespaces mostly[10] isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting, including of CPU, memory, block I/O, and network.

https://en.wikipedia.org/wiki/Application_virtualization
In application virtualization, "virtualization" refers to the artifact being encapsulated (the application), which is quite different from its meaning in hardware virtualization, where it refers to the artifact being abstracted (the physical hardware).
Full application virtualization requires a virtualization layer.[2] Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all disk operations of virtualized applications and transparently redirects them to a virtualized location, often a single file.[3] The application remains unaware that it is accessing a virtual resource instead of a physical one. Since the application is now working with one file instead of many files spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side. Example: Wine allows some Microsoft Windows applications to run on Linux.

https://en.wikipedia.org/wiki/Web_application
A web application or web app is a client–server computer program in which the client (including the user interface and client-side logic) runs in a web browser.[1] Common web applications include webmail, online retail sales, online auctions, wikis, instant messaging services and many other functions. Examples: Gmail, Yahoo Mail, Amazon.

- An auction is a process of buying and selling goods or services by offering them up for bid, taking bids, and then selling the item to the highest bidder. An online auction is an auction held over the internet.
- A wiki is a website on which users collaboratively modify content and structure directly from the web browser.
- Instant messaging (IM) is a type of online chat that offers real-time text transmission over the Internet.

Websites most likely to be referred to as "web applications" are those which have similar functionality to a desktop software application, or to a mobile app. HTML5 introduced explicit language support for making applications that are loaded as web pages, but can store data locally and continue to function while offline.
Single-page applications are more application-like because they reject the more typical web paradigm of moving between distinct pages with different URLs. Single-page frameworks like Sencha Touch and AngularJS might be used to speed development of such a web app for a mobile platform.

https://en.wikipedia.org/wiki/Distributed_transaction
A distributed transaction is a database transaction in which two or more network hosts are involved. Usually, hosts provide transactional resources, while the transaction manager is responsible for creating and managing a global transaction that encompasses all operations against such resources. A common algorithm for ensuring correct completion of a distributed transaction is the two-phase commit (2PC). This algorithm is usually applied for updates able to commit in a short period of time, ranging from a couple of milliseconds to a couple of minutes.

There are also long-lived distributed transactions, for example a transaction to book a trip, which consists of booking a flight, a rental car and a hotel. Since booking the flight might take up to a day to get a confirmation, two-phase commit is not applicable here, since it would lock the resources for that long. In this case, more sophisticated techniques that involve multiple undo levels are used. Just as you can undo a hotel booking by calling the desk and cancelling the reservation, a system can be designed to undo certain operations (unless they have finished irreversibly).

In practice, long-lived distributed transactions are implemented in systems based on Web Services. Usually these transactions utilize the principles of compensating transactions, optimism, and isolation without locking. The X/Open standard does not cover long-lived DTP. Several modern technologies, including Enterprise JavaBeans (EJBs) and Microsoft Transaction Server (MTS), fully support distributed transaction standards.
https://en.wikipedia.org/wiki/Two-phase_commit_protocol
In transaction processing, databases, and computer networking, the two-phase commit protocol (2PC) is a type of atomic commitment protocol (ACP). It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction (it is a specialized type of consensus protocol).

In a "normal execution" of any single distributed transaction (i.e., when no failure occurs, which is typically the most frequent situation), the protocol consists of two phases:

- The commit-request phase (or voting phase), in which a coordinator process attempts to prepare all the transaction's participating processes (named participants, cohorts, or workers) to take the necessary steps for either committing or aborting the transaction, and to vote either "Yes": commit (if the participant's local portion executed properly) or "No": abort (if a problem has been detected with the local portion).
- The commit phase, in which, based on the voting of the cohorts, the coordinator decides whether to commit (only if all have voted "Yes") or abort the transaction (otherwise), and notifies the result to all the cohorts. The cohorts then follow with the needed actions (commit or abort) on their local transactional resources (also called recoverable resources; e.g., database data) and their respective portions of the transaction's other output (if applicable).

desktop application vs client
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be considered client-side operations.
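The two phases above can be sketched with the cohorts modelled as plain objects (all names here are illustrative, not part of any real transaction API, and the failure-handling parts of the protocol are omitted):

```javascript
// Phase 1 (commit-request/voting): the coordinator asks every cohort to
// prepare and vote. Phase 2 (commit): only if all votes are "yes" does the
// coordinator tell the cohorts to commit; a single "no" makes all abort.
function twoPhaseCommit(cohorts) {
  const votes = cohorts.map((c) => c.prepare());              // phase 1
  const decision = votes.every((v) => v === "yes") ? "commit" : "abort";
  cohorts.forEach((c) => c[decision]());                      // phase 2
  return decision;
}

// Toy cohort that records what happens to it in a shared trace.
function makeCohort(vote, trace, name) {
  return {
    prepare: () => { trace.push(`${name} votes ${vote}`); return vote; },
    commit:  () => trace.push(`${name} commits`),
    abort:   () => trace.push(`${name} aborts`),
  };
}

const trace = [];
const result = twoPhaseCommit([
  makeCohort("yes", trace, "db1"),
  makeCohort("no",  trace, "db2"),
]);
console.log(result); // "abort" — one "No" vote aborts the whole transaction
```

A run where every cohort votes "yes" returns "commit" instead, and each cohort then commits its local portion.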
https://en.wikipedia.org/wiki/Server-side_scripting
Server-side scripting technologies: ASP, JSP, PHP, Node.js, Python.

https://en.wikipedia.org/wiki/Node.js
Node.js is an open-source, cross-platform JavaScript run-time environment for executing JavaScript code server-side. Node.js has an event-driven architecture capable of asynchronous I/O. These design choices aim to optimize throughput and scalability in web applications with many input/output operations, as well as for real-time web applications (e.g., real-time communication programs and browser games).[6]

dns
https://www.youtube.com/watch?v=2ZUxoi7YNgs

proof by induction / induction hypothesis / Q.E.D.
http://comet.lehman.cuny.edu/sormani/teaching/induction.html
A proof by induction is just like an ordinary proof in which every step must be justified. However, it employs a neat trick which allows you to prove a statement about an arbitrary number n by first proving it is true when n = 1 and then assuming it is true for n = k and showing it is true for n = k + 1. The idea is that if you want to show that someone can climb to the nth floor of a fire escape, you need only show that you can climb the ladder up to the fire escape (n = 1) and then show that you know how to climb the stairs from any level of the fire escape (n = k) to the next level (n = k + 1). If you've done proof by induction before, you may have been asked to assume the n - 1 case and show the n case, or assume the n case and show the n + 1 case.

unix commands
http://searchnetworking.techtarget.com/definition/time-to-live

regex
https://www.w3schools.com/jsref/jsref_obj_regexp.asp
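As a worked example of the induction pattern above (base case, induction hypothesis, inductive step), here is the classic sum formula:

```latex
\textbf{Claim.} For all $n \ge 1$: $\;1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.

\textbf{Base case} ($n = 1$): the left side is $1$ and the right side is
$\frac{1 \cdot 2}{2} = 1$, so the claim holds.

\textbf{Induction hypothesis:} assume the claim holds for $n = k$, i.e.\
$1 + 2 + \cdots + k = \frac{k(k+1)}{2}$.

\textbf{Inductive step} ($n = k + 1$): adding $k+1$ to both sides of the hypothesis,
\[
1 + 2 + \cdots + k + (k+1)
  = \frac{k(k+1)}{2} + (k+1)
  = \frac{k(k+1) + 2(k+1)}{2}
  = \frac{(k+1)(k+2)}{2},
\]
which is exactly the claim for $n = k + 1$. \textit{Q.E.D.}
```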