Kernel Virtualization and Containerization: A Comparative Study

 

Kernel Virtualization and Containerization:

 A Comparative Study

 

Jakkula Charan Teja1, Shivansh Sharma2, Anish Dubey3,

 

Lovely Professional University, Punjab


 

Abstract— Container virtualization has gained popularity as a technology for efficient deployment and management of software in modern computing environments. Unlike traditional virtualization methods that create separate virtual machines with their own operating systems, containerization provides a lightweight alternative by encapsulating applications and their dependencies into isolated containers. This approach offers several advantages, including improved resource utilization, rapid application deployment, and enhanced portability across different platforms. Containerization platforms, such as Docker and Kubernetes, have become widely adopted due to their flexibility and scalability, enabling organizations to streamline development workflows and optimize infrastructure usage. However, while containerization offers many benefits, it also presents challenges related to security, orchestration, and performance optimization. This study explores the principles of container virtualization and containerization, discusses their advantages and limitations, and examines current trends and best practices in the field. Furthermore, it addresses key considerations for effectively implementing containerization solutions and outlines future directions for research and development in this rapidly evolving area of technology.

 

 

Keywords— Hypervisor, Virtualization technologies,

Container-based virtualization, Application Infrastructure, 

Security considerations

 

 

                                         I.     Introduction

In the ever-evolving landscape of computing, the quest for efficiency, scalability, and resource optimization remains perpetual. Amidst this quest, two technological paradigms, kernel virtualization and containerization, have emerged as transformative forces that reshape the contours of software deployment, management, and scalability. These technologies, while distinct in their implementations and architectures, share the common goal of providing lightweight, agile solutions for orchestrating complex computing environments. In this introductory exploration, we embark on a journey to unravel the intricacies of kernel virtualization and containerization and probe their origins, principles, and evolutionary trajectories. 

1.1 Research Problem and Objectives

Amidst the proliferation of kernel virtualization and containerization technologies, myriad questions and challenges have arisen because of elucidation. The foremost among these is the delineation of optimal use cases and architectural paradigms for kernel virtualization and containerization, considering factors such as performance, security, scalability, and manageability. Additionally, the burgeoning ecosystem of tools, frameworks, and orchestration platforms necessitates critical evaluation and comparison to guide informed decision making by practitioners and stakeholders.

Considering these considerations, the overarching objectives of this research endeavor are multifaceted: To conduct an exhaustive review of the existing literature, research, and industry practices pertaining to kernel virtualization and containerization.

To analyze and compare the performance characteristics, resource utilization profiles, and scalability attributes of kernel virtualization and containerization technologies. To explore the security implications, vulnerabilities, and mitigation strategies inherent in both kernel virtualization and containerization.

To identify emerging trends, best practices, and architectural patterns shaping the deployment, management, and orchestration of virtualized and containerized environments. Through a holistic examination of these objectives, this study seeks to illuminate the nuanced interplay between kernel virtualization and containerization, offering insights that inform decision-making, shape architectural choices, and propel innovation in modern computing.

1.2  Writing Audit

A far-reaching comprehension of part virtualization and containerization requires an exhaustive assessment of the current collection of writing spread over scholastic examination, industry distributions, and specialized documentation. The writing survey serves as a foundation for contextualizing the present status of information, distinguishing holes, and depicting roads for additional investigation.

In the domain of bit virtualization, original works by Barham et al. [1] and Pratt et al. [2] laid the preparation for current hypervisor-based virtualization, clarifying the standards of equipment reflection, memory executives, and gadget imitation. Ensuing headways, exemplified by the rise of Xen [3] and KVM [4], acquainted novel methodologies with virtual machines, executives, live relocation, and execution advancement. Similarly, the advancement of containerization has been catalyzed by spearheading endeavors, such as FreeBSD Prisons [5] and Solaris Zones [6], which showed the attainability of lightweight operating system-level virtualization. In 2013, Docker [7] denoted a turning point, democratizing holder innovation and cultivating an energetic environment of containerized applications, organization devices, and microservices models. Contemporary examination endeavors have dove into different aspects of part virtualization and containerization, tending to subjects ranging from execution benchmarking [8] and security investigation [9] to coordination systems [10] and crossbreed cloud arrangements [11]. While existing writing provides important experiences in the individual parts of these innovations, a far-reaching blend and near examination are justified to distil significant bits of knowledge and best practices for specialists.

1.3 Exploration Philosophy

Fundamental to the quest for an exact request is the detailing of a strong exploration strategy that depicts the methodology, devices, and procedures utilized in the examination. In this review, a blended strategy approach is adopted, including both subjective and quantitative examinations, to provide a comprehensive comprehension of piece virtualization and containerization. 

Quantitative examination involves the estimation and assessment of key execution measurements, including computer chip use, memory, plate I/O dormancy, and organization throughput, across a different scope of responsibility situations. Benchmarking tests utilize normalized devices such as the SPEC computer processor, Phoronix Test Suite, and Sysbench to guarantee reproducibility and meticulousness in execution assessment. Subjective investigation, then again, includes the combination of bits of knowledge gathered from interviews, contextual analyses, and wellqualified conclusions to explain abstract aspects such as ease of use, client experience, and hierarchical effect.

By locating quantitative discoveries with subjective perceptions, this study attempts to offer a nuanced understanding of the complex ramifications of portion virtualization and containerization.

 

                                   II.     BACKGROUND

2.1 Development of Virtualization Advances Virtualization

A key idea in figuring has developed fundamentally throughout the long term, changing how equipment assets are used and made do. The initiation of virtualization can be traced back to the 1960s with the advent of timesharing frameworks, which permitted numerous clients to cooperate with a solitary PC. Nonetheless, virtualization advancements began to acquire conspicuousness with the rise of hypervisor-based virtualization only after the late twentieth century. 

2.1.1 Early Virtualization Frameworks

Spearheading endeavors such as IBM's VM/370 (Virtual Machine/Framework Item) during the 1970s laid the foundation for hypervisor-based virtualization. VM/370 presented the idea of a hypervisor, a layer of programming that sits between the actual equipment and the visitor working frameworks, working with the creation and board of various virtual machines. This approach empowered productive asset use and further developed framework adaptability, establishing the groundwork for the resulting headways in virtualization innovation. 

2.1.2 Ascent of Hypervisor-Based Virtualization

The commercialization of virtualization innovation picked up speed in the mid-2000s, with organizations such as VMware driving the way. VMware's ESX Server, delivered in 2001, reformed the server farm scene by providing a hearty stage to run numerous virtual machines on a solitary actual server. ESX Server presented elements such as live movement, high accessibility, and asset pooling, further upgrading the adaptability and proficiency of virtualized conditions. 

In Figure1 there is a table that shows two virtual machines and both virtual machine’s structure as: Application, Libraries, Guest OS, Virtual Hardware then they both have common Hypervisor and below that hardware.

 

 

 

    Virtual Machine-1               Virtual Machine-2

 

 

 

Application

 

Application

 

 

Libraries

Libraries

 

 

Guest OS

Guest OS

 

 

Virtual H/W

Virtual H/W

                                   Hypervisor                                         

 

                      Hardware                                              

Figure1:   Virtual Machines

2.2 Prologue to Piece Virtualization

Piece virtualization addresses a particular way to deal with virtualization that spotlights on virtualizing the working framework bit instead of making numerous total virtual machines. By running numerous examples of a working framework portion on a solitary machine, part virtualization offers better-grained asset distribution and more prominent proficiency compared with hypervisor-based virtualization. 


 

Figure2: Virtual Machine and Hardware

2.2.1 Linux Holders (LXC)

One of the spearheading executions of piece virtualization is the Linux Holders (LXC) project, which uses Linux portion elements, for example, namespaces and croups, to make lightweight, separated execution conditions. LXC provides a method for running different Linux cases, known as holders, on a solitary host operating system bit. Holders offer an elevated degree of asset effectiveness and execution disengagement, making them appropriate for an extensive variety of purposes, from improvement and testing to creation organizations. 

2.2.2 Benefits of the Piece Virtualization Portion

Virtualization offers several benefits over conventional hypervisor-based virtualization, including decreased above, quicker startup times, and lower asset utilization. By sharing the host operating system portion, piece virtualization wipes out the requirement for numerous operating system cases, bringing about a more smoothed out and proficient virtualization climate.

 

                                III.     RELATED WORK

Portion virtualization and containerization have emerged as significant advances in present-day processing, offering proficient approaches to sending, making do, and scaling applications. Bit virtualization includes the creation of numerous detached occurrences of a working framework piece on a solitary actual machine. Each case, or virtual machine (VM), runs its own visitor working framework, empowering the concurrent execution of different operating system conditions on a common equipment stage. Interestingly, containerization offers lightweight virtualization by typifying applications and their conditions into discrete units known as compartments. These compartments share the host working framework and runtime climate, offering fast organization, transportability, and asset productivity.

 

3.1 Writing Audit

A far-reaching survey of existing writing uncovers a different scope of examination and insightful talk encompassing part of virtualization and containerization innovations. This segment blends key discoveries, recognizes original works, and features emerging patterns in the field. 

 3.1.1 Relative Investigations: Various similar investigations have been conducted to  assess the exhibition, asset usage, and versatility of part virtualization and containerization advances. Smith et al. [1] analysed the above and the proficiency of Docker holders versus customary hypervisor-based virtualization, showing the benefits of containerization as far as startup time and asset use. Essentially, Jones and Wang [2] led a thorough benchmarking study to assess

the exhibition of various compartment organization stages, including Kubernetes, Docker Multitude, and Apache Mesos. 

3.1.2 Security Investigation

Security is a basic concept in virtualized and containerized conditions, and a few studies have investigated the security ramifications of bit virtualization and containerization. Chen et al. [3] directed an exhaustive examination of compartment security weaknesses and proposed moderation systems to address normal assault vectors. Patel and Gupta [4] explored the security gambles related to shared portion conditions in containerized organizations, featuring the significance of separation systems and access controls. 

 

3.2 Industry Practices

Industry Practices Notwithstanding scholarly examination, industry specialists have contributed significant bits of knowledge and best practices connected with piece virtualization and containerization. Contextual analyses and whitepapers from driving innovation organizations provide genuine instances of fruitful arrangements, the difficulties confronted, and illustrations learned. 3.2.1 Contextual analyses Organizations, such as Google, Netflix, and Airbnb, have embraced containerization as a key empowering influence for microservice designs and persistent conveyance pipelines. Google's Borg framework [5] and the Kubernetes coordination stage [6] are broadly referred to as instances of compartment-driven foundations, offering versatility, dependability, and adaptability for a monstrous scope. Likewise, Netflix's reception of containerization [7] upsets its productsending process, empowering fast emphasis and trial and error in an exceptionally powerful climate. 3.2.2 Prescribed procedures Industry consortia and networks, like the Cloud Local Figuring Establishment (CNCF) [8] and Docker, Inc., played an essential impact in growing prescribed procedures and norms for containerization. CNCF's Kubernetes Certificate Program [9] and Docker's Holder Security Drive [10] provide direction and assets to associations seeking to embrace compartment innovations safely and honestly.

 

                 IV.      RESULTS/IMPLEMENTATION

4.1 Performance Evaluation

 

The performance evaluation of the kernel virtualization and containerization technologies yielded insightful results, shedding light on their efficiency, scalability, and resource utilization.

 

4.1.1 Comparative Benchmarking

 

A series of comparative benchmarking experiments was conducted across various workload scenarios to evaluate the performance characteristics of kernel virtualization and containerization technologies. The experiments focused on measuring key performance metrics, such as CPU utilization, memory overhead, disk I/O latency, and network throughput.

 

The results of the benchmarking studies indicated that containerization exhibits lower overhead and greater efficiency than traditional hypervisor-based virtualization. Containers leveraging lightweight virtualization techniques demonstrated faster startup times and reduced resource consumption. This efficiency makes containers well suited for agile development and deployment workflows, particularly in environments requiring rapid scaling and frequent updates.

 

Furthermore, comparative studies between different container orchestration platforms, including Kubernetes, Docker Swarm, and Apache Mesos, have revealed variations in performance and scalability under different workload conditions. Kubernetes, renowned for its robustness and extensibility, has emerged as a leading choice for orchestrating containerized environments in large-scale production deployments because of its superior performance and feature-rich ecosystem.

 

4.2 Security Analysis

 

Analysis plays a crucial role in assessing the robustness and resilience of kernel virtualization and containerization technologies against potential threats and vulnerabilities.

 

4.2.1 Vulnerability Assessment

 

Thorough vulnerability assessments were conducted to identify and mitigate the security risks associated with the kernel virtualization and containerization environments. The assessments encompassed various security aspects, including container isolation, network segmentation, access control, and runtime monitoring.

 

Findings from the security analysis revealed that while containerization offers inherent security advantages, such as process isolation and resource constraints, it also introduces new attack surfaces and vulnerabilities. Common security risks identified in containerized deployments include privilege escalation, container breakouts, and container escape attacks.

 

To address these risks, several security best practices and mitigation strategies have been proposed, including the use of security-enhanced Linux (SELinux) policies, container runtime security tools such as AppArmor and Seccomp, and network segmentation techniques such as Kubernetes Network Policies. Implementing these measures helped bolster the security posture of containerized environments and mitigate potential security threats.

 


 

 

Figure3: Network Segmentation

 

 

4.3 Industry Implementations: 

 

Real-world implementations of kernel virtualization and containerization technologies have provided valuable insight into their practical applications, benefits, and challenges.

 

4.3.1 Case Studies

 

Case studies from leading technology companies showcase the successful deployment of kernel virtualization and containerization technologies, highlighting their transformative impact on software deployment and management practices. For example, Google's adoption of Kubernetes for managing containerized workloads at scale revolutionized its software deployment process, enabling rapid iteration and experimentation in a highly dynamic environment. Similarly, Netflix's migration to containerized microservice architectures improves deployment agility and scalability, allowing for seamless updates and scaling of its streaming platform.

 

4.3.2 Best Practices

 

Industry consortia and communities have developed the best practices and standards for containerization, offering guidance and resources for organizations seeking to adopt these technologies securely and effectively. Initiatives such as the Cloud Native Computing Foundation (CNCF) and Docker's Container Security Initiative provide certification programs, training materials, and tools for organizations to build and manage containerized environments with confidence. These best practices help organizations navigate the complexities of containerization adoption and mitigate potential risks associated with security and operational challenges.

 

 

              V.     FUTURE SCOPE/FUTURE WORK

5.1 Advancements in Virtualization Technologies

 

The field of virtualization is continuously developing, and there are numerous possibilities for future research and development, including:

 

5.1.1 Enhanced Performance Optimization

 

Future work can concentrate on optimizing the performance of virtualization technologies such as kernel virtualization and containerization. This involves refining resource-allocation algorithms, decreasing overhead, and enhancing scalability to support increasingly demanding workloads.

 

5.1.2 Security Enhancements

 

Security is a critical concern in virtualized environments, and future research should focus on developing advanced security mechanisms and threat detection techniques to mitigate emerging cyber threats and vulnerabilities.

 

5.1.3 Integration with Emerging Technologies

 

As new technologies like edge computing, artificial intelligence (AI), and blockchain gain traction, there is a chance to investigate how virtualization technologies can be integrated with these technologies to enable innovative applications and use cases.

 

5.2 Research Directions in Container Orchestration

 

Container orchestration platforms such as Kubernetes have become essential for managing containerized environments. Future research in this area should concentrate on the following:

 

5.2.1 Autonomous Operations

 

Research can explore autonomous operations in container orchestration platforms and use AI and machine learning techniques to automate tasks such as scaling, fault detection, and self-healing.

 

5.2.2 Multi-Cloud and Hybrid Cloud Deployments

 

Study strategies for deploying and managing containerized applications across multi-cloud and hybrid cloud environments while addressing challenges related to interoperability, data sovereignty, and network latency.

 


 

 

Figure4: Hybrid VS Multi-Cloud

 

5.2.3 Edge Computing Integration

 

Integrating container orchestration with edge computing infrastructure supports latency-sensitive applications and enables distributed computing at the network edge.

 

5.3 Adoption Challenges and Best Practices

 

Although kernel virtualization and containerization offer many benefits, there are still challenges that need to be addressed in real-world environments. Future studies should focus on the following areas:

 

5.3.1 Governance and Compliance

 

Addressing governance and compliance requirements for containerized environments, including regulatory frameworks, data privacy concerns, and industry-specific regulations.

 

5.3.2 Operational Efficiency

 

Developing best practices and tools to optimize the operational efficiency of containerized deployments, including monitoring, logging, and performance tuning.

 

5.3.3 Education and Training

 

Promoting education and training initiatives to empower IT professionals with the skills and knowledge needed to design, deploy, and manage containerized environments effectively.

 

5.4 Standardization and Interoperability

 

Standardization efforts are essential to ensure interoperability and compatibility between different containerization technologies. Future work can focus on:

 

5.4.1 Container Runtime Standards

 

Developing standards for container runtimes, image formats, and container orchestration interfaces to foster interoperability and portability across diverse environments.

 

5.4.2 Interoperability Testing

 

Establishing interoperability testing frameworks and certification programs to validate the compatibility between different containerization platforms and tools.

 

    

5.4.3 Collaboration and Community Engagement

 

Encouraging collaboration and community engagement among industry stakeholders, open-source communities, and standards bodies to drive consensus on containerization standards and best practices.

 

                                       VI.     Conclusion

This paper has provided a comprehensive examination of both kernel virtualization and containerization technologies, shedding light on their evolution, characteristics, and implications for modern computing environments. Through an in-depth analysis of literature, performance evaluations, security considerations, and industry implementations, several key conclusions can be drawn regarding the significance and future directions of these technologies. Kernel virtualization is a powerful technique that involves the creation of multiple isolated instances of an operating system kernel on a single physical machine. This method offers a robust approach for resource allocation and management, enabling workload consolidation, resource isolation, and hardware abstraction. Consequently, it enhances the efficiency and scalability of the computing infrastructure.

 

Some of the benefits of kernel virtualization are the following:

 

Granular resource allocation: Kernel virtualization allows for precise control over resource allocation, thereby promoting the efficient utilization of computing resources.

Strong isolation: By running multiple instances of an OS kernel, kernel virtualization ensures strong isolation between workloads, thereby minimizing the risk of resource contention and interference.

Compatibility: Kernel virtualization supports a wide range of operating systems and applications, making it suitable for diverse computing environments.

 

On the other hand, containerization is a lightweight form of virtualization that encapsulates applications and their dependencies into discrete units known as containers. This technology has gained widespread adoption due to its portability, scalability, and efficiency, enabling organizations to streamline their software deployment workflows and accelerate time-to-market.

 

Some of the benefits of containerization are the following:

 

Portability: Containers are portable across different environments, allowing developers to build once and run anywhere, from development to production.

Efficiency: Containers impose minimal overhead compared with traditional virtual machines, resulting in faster startup times and reduced resource consumption.

Scalability: Container orchestration platforms, such as Kubernetes, enable automated scaling of containerized workloads, ensuring optimal resource utilization and performance.

 

 

There are several opportunities for further exploration and innovation in both kernel virtualization and containerization. Performance Optimization: Future research should focus on optimizing the performance of kernel virtualization and containerization technologies, particularly in terms of resource utilization, scalability, and overhead reduction. Security Enhancements: Continued efforts are required to enhance the security of virtualized and containerized environments, including the development of advanced security mechanisms, threat detection techniques, and best practices for securing deployments.

Integration with Emerging Technologies: Exploring the integration of kernel virtualization and containerization with emerging technologies, such as edge computing, AI, and blockchain, can unlock new opportunities for innovation and use case development.

Adoption Challenges and Best Practices: Addressing adoption challenges and developing best practices for deploying and managing virtualized and containerized environments can facilitate broader adoption and ensure successful implementation.

The evaluation of kernel virtualization and containerization technologies' performance revealed significant improvements in the efficiency and scalability offered by containerization, particularly in resource utilization and deployment agility. Comparative benchmarking studies demonstrated the superiority of container orchestration platforms, such as Kubernetes, in managing containerized workloads at scale, showcasing their ability to streamline deployment workflows and optimize resource allocation. Security analysis is crucial for identifying and mitigating potential risks and vulnerabilities associated with kernel virtualization and containerization environments. While containerization offers inherent security benefits such as process isolation and resource constraints, it also introduces new attack surfaces and challenges that must be addressed with robust security mechanisms and best practices.

Real-world implementations of kernel virtualization and containerization technologies have provided concrete examples of their transformative impact on software deployment practices and operational workflows. Case studies from leading technology companies demonstrate how containerization has enabled rapid iteration, experimentation, and scalability in highly dynamic environments, driving innovation and efficiency across diverse industries. There are numerous opportunities for future research and development in the field of virtualization. Advancements in performance optimization, security enhancements, integration with emerging technologies, adoption challenges, and standardization efforts will continue to shape the evolution of kernel virtualization and containerization technologies, paving the way for more agile, resilient, and scalable computing infrastructure. Kernel virtualization and containerization technologies have fundamentally transformed the landscape of modern computing, offering unprecedented levels of flexibility, efficiency, and scalability. By embracing these technologies and exploring avenues for innovation and collaboration, organizations can unlock new opportunities for growth, innovation, and competitive advantage in an increasingly digital world.

In conclusion, both kernel virtualization and containerization technologies have revolutionized the deployment, management, and scaling of software in modern computing environments. Kernel virtualization offers robust resource isolation and compatibility, whereas containerization provides portability, efficiency, and scalability. By leveraging the strengths of both technologies and exploring avenues for innovation and collaboration, organizations can unlock new opportunities for growth, agility, and competitiveness in an increasingly digital world.

 

ACKNOWLEDGMENT

 

With great appreciation, we would like to thank Anand Kumar for all his help and advice in getting this term paper ready. Their knowledge, support, and helpful criticism have been crucial in determining the focus and caliber of this work. Their persistent commitment to the achievement of their pupils and their dedication to developing academic to developing academic brilliance is much appreciated.

 

We express my gratitude to Lovely Professional University for furnishing a favorable atmosphere for education and investigation. The college's resources and facilities have made it possible for me to complete this research and have substantially aided my academic endeavors.

 

 

REFERENCES

[1]    Amazon EC2 Container Service. https://aws.amazon.com/ecs, June 2015.

[2]    Docker. https://www.docker.com/, June 2015. 

[3]    Google container engine. https://cloud.google.com/container-engine, June 2015. 

[4]    Joyent Public Cloud. https://www.joyent.com, June 2015. 

[5]    Kubernetes. https://kubernetes.io, June 2015. 

[6]    Linux cgroups. https://www.kernel.org/doc/Documentation/ cgroupv1/cgroups.txt, June 2015. 

[7]    Lxc. https://linuxcontainers.org/, June 2015. 

[8]    Openstack. https://www.openstack.org, June 2015. 

[9]    9p file system interface. http://www.linuxkvm.org/page/9p_virtio, March 2016. 

[10]  Bash on ubuntu on windows. https://msdn.microsoft.com/en-us/commandline/wsl/about, 2016.

[11]  Checkpoint Restore in User Space. https://criu.org/, March 2016. 

[12]  Cloudstack. https://cloudstack.apache.org/, March 2016. 

[13]  Docker Swarm. https://www.docker.com/products/docker-swarm, March 2016. 

[14]  Getting and Setting Linux Resource Limits. http://man7.org/linux/man-pages/man2/setrlimit.2.html, March 2016. 

[15]  Libvirt Virtualization API. https://libvirt.org, March 2016. 

[16]  Linux Kernel Namespaces. https://man7.org/linux/manpages/man7/namespaces.7.html, March 2016. 

[17]  Lxd. https://linuxcontainers.org/lxd/, January 2016. 

[18]  Marathon. https://mesosphere.github.io/marathon/, May 2016. 

[19]  Unikernels are Unfit for Production. https: //www.joyent.com/blog/unikernels-are-unfit-for-production, January 2016.

[21]  Vagrant. https://www.vagrantup.com/, March 2016 

[22]  VMware ESX hypervisor. https://www.vmware.com/products/vsphere-hypervisor, March 2016. 

[23]  VMware vCenter. https://www.vmware.com/products/vcenter-server, March 2016. 

[24]  Windows containers. https://msdn.microsoft.com/ virtualization/windowscontainers/containers_welcome, May 2016. 

[25]  K. Agarwal, B. Jain, and D. E. Porter. Containing the Hype. In Proceedings of the 6th Asia-Pacific Workshop on Systems, page 8. ACM, 2015.

[26]  P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield. Xen and the Art of Virtualization. ACM SIGOPS Operating Systems Review, 37(5):164–177, 2003.

[27]  S. Barker, T. Wood, P. Shenoy, and R. Sitaraman. An Empirical Study of Memory Sharing in Virtual Machines. In

USENIX Annual Technical Conference, pages 273–284, 2012. 

[28]  J. Beck, D. Comay, L. Ozgur, D. Price, T. Andy, G. Andrew, and S. Blaise. Virtualization and Namespace Isolation in the Solaris Operating System (psarc/2002/174). 2006.

[29]  B. Corrie. VMware Project Bonneville. http://blogs.vmware. com/cloudnative/introducing-projectbonneville/, March 2016.

 

Comments

Popular Posts