CCSK_glossary
  1. Module 1
  2. Internet of Things
  3. Internet of Things is a blanket term for non-traditional computing devices used in the physical world that utilize Internet connectivity. It includes everything from Internet-enabled operational technology (used by utilities like power and water) to fitness trackers, connected light bulbs, medical devices, and beyond.
  4. Cloud Computing
  5. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
  6. Resource Pools
  7. Resource pools are how we build clouds. They are collections of physical resources that are pooled together so that a consumer of the cloud can pull resources from the pool, use them for as long as they are needed, and return them to the pool for others to use.
  8. Broad Network Access
  9. All resources are available over a network, without any need for direct physical access; the network is not necessarily part of the service.
  10. Rapid Elasticity
  11. Allows consumers to expand or contract the resources they use from the pool (provisioning and deprovisioning), often completely automatically. This allows them to more closely match resource consumption with demand (for example, adding virtual servers as demand increases, then shutting them down when demand drops).
  12. Measured Service
  13. Metering of what is provided, to ensure that consumers only use what they are allotted and, if necessary, to charge them for it. This is where the term utility computing comes from, since computing resources can now be consumed like water and electricity, with the client only paying for what they use.
  14. Abstraction
  15. Also known as virtualization, abstraction separates resources from their underlying structure, allowing creation of resource pools from those underlying assets.
  16. Automation
  17. Also known as orchestration, allows rapidly provisioning and deprovisioning of resources from the resource pool.
  18. Multitenancy
  19. An emergent property of resource pooling that requires strong segregation and isolation.
  20. Governance
  21. The overall management model of the cloud, including contracts, service levels and policies.
  22. Isolation
  23. The concept that one segment of consumers in the cloud should not ever see anything running in another segment. A core control that allows for multiple tenants to safely share resource pools.
  24. Segmentation
  25. How the provider divides the cloud among different tenants.
  26. Infrastructure as a Service (IaaS)
  27. The most foundational of the service models, provides resource pools of virtualized infrastructure such as compute, network, or storage pools. Includes facilities, hardware, abstraction, core connectivity and delivery, and APIs.
  28. Software as a Service (SaaS)
  29. Fully abstracts everything except the application itself. Cloud consumers use the application but have no insight or management of the underlying resources. Consumers access it with a web browser, mobile app, or a lightweight client app.
  30. Platform as a Service (PaaS)
  31. Abstracts and provides development or application platforms, such as databases, application platforms (e.g. a place to run Python, PHP, or other code), file storage and collaboration, or even proprietary application processing (such as machine learning, big data processing, or direct Application Programming Interfaces (API) access to features of a full SaaS application). The key differentiator is that, with PaaS, you don’t manage the underlying servers, networks, or other infrastructure.
  32. Application Programming Interfaces (APIs)
  33. APIs are typically the underlying communication method for components within a cloud, some of which (or an entirely different set) are exposed to the cloud user to manage their resources and configurations. Most cloud APIs these days use REST (Representational State Transfer), which runs over the HTTP protocol, making it extremely well suited for Internet services.
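
As an illustration of the REST-over-HTTP pattern described above, the Python sketch below calls a hypothetical management API with the requests library. The base URL, resource path, and bearer token are placeholders, not any specific provider's API.

# Minimal sketch of calling a REST-style cloud API (hypothetical endpoint).
import requests

API_BASE = "https://cloud.example.com/api/v1"   # placeholder, not a real provider
TOKEN = "REPLACE_WITH_A_REAL_TOKEN"             # placeholder credential

def list_instances():
    # REST over HTTP: a GET request returns a JSON list of resources.
    resp = requests.get(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for inst in list_instances():
        print(inst)
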
  34. Cloud Deployment Models
  35. Deployment models describe how the cloud is offered to customers.
  36. Public Cloud: Open to anyone who signs up for the service, so the cloud provider is responsible for keeping consumers isolated from each other. You are only responsible for what you deploy in the cloud.
  37. Private Cloud: Reserved for trusted users, operated for a single organization on- or off-premises. You are responsible for securing all the hardware and software that makes up the cloud platform.
  38. Hybrid Cloud: Connects on-premises resources to a public cloud deployment. A combination of two or more clouds that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
  39. Community Cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be managed by the organizations or a third party and may exist on- or off-premises.
  40. Logical Model
  41. Another way of thinking about IT assets. Includes the following areas:
  42. Infrastructure: The foundation and core components of a computing system (compute, network, and storage).
  43. Metastructure: The protocols and mechanisms that provide the interface between the infrastructure layer and the other layers.
  44. Applistructure: The applications deployed in the cloud and the underlying services used to build them.
  45. Infostructure: Data and information, such as database content or file storage.
  46. Shared Responsibilities Model
  47. The heart of cloud security; where the responsibilities between cloud providers and consumers are determined. Responsibilities are not always set in stone, but this is the typical way responsibilities are delegated:
  48. Provider: responsible for physical infrastructure, virtualization/abstraction, and application and PaaS services.
  49. Consumer: responsible for host/server security, network, IAM, and metastructure configuration security, data and application security.
  50. Management Plane / Metastructure
  51. Interface between provider and consumer in the shared responsibilities model. How you access and control your cloud, provision and configure resources, and start/stop/terminate services. Can be accessed via a web console or REST/web-based APIs.
  52.  
  53. Module 2
  54. Virtual Networks
  55. Most common types include:
  56. VLAN: Virtual LAN, leverages existing network technology to segregate (not isolate) networks; designed for single-tenant environments and not effective as a security barrier.
  57. Software-Defined Networks (SDNs): Preferred networks, they provide better isolation and security; they decouple the control plane from the underlying physical network and offer more flexibility.
  58. Software Defined Networking (SDN)
  59. A more complete abstraction layer on top of networking hardware, SDNs decouple the network control plane from the data plane. This allows us to abstract networking from the traditional limitations of a LAN.
  60. Security Groups
  61. Common name for firewalling built into SDNs, security groups provide ability to manage the network firewall with the granularity of a host firewall. It is policy-based, typically default-deny, integrated in core SDN logic, and applied on a per-asset level.
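
As a rough illustration of the default-deny, policy-based model described above, here is a small Python sketch (not any provider's actual API) that evaluates whether inbound traffic matches a security-group rule set.

# Sketch: default-deny evaluation of security-group-style rules (illustrative only).
from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str     # e.g. "tcp"
    port: int         # destination port
    source_cidr: str  # simplified: exact string match instead of real CIDR logic

def is_allowed(rules, protocol, port, source_cidr):
    # Default deny: traffic is dropped unless an explicit rule matches.
    for r in rules:
        if (r.protocol, r.port, r.source_cidr) == (protocol, port, source_cidr):
            return True
    return False

web_sg = [Rule("tcp", 443, "0.0.0.0/0"), Rule("tcp", 22, "10.0.0.0/8")]
print(is_allowed(web_sg, "tcp", 443, "0.0.0.0/0"))   # True
print(is_allowed(web_sg, "tcp", 3389, "0.0.0.0/0"))  # False (no matching rule)
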
  62. Bastion Virtual Network
  63. An architecture for hybrid connectivity in which one special-purpose ("bastion" or transit) virtual network holds the connection back to the data center, and other virtual networks peer with it rather than each connecting to the data center directly.
  64. Transit Virtual Private Cloud (VPC)
  65. Connects multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center. Simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks.
  66. Software-Defined Perimeter (SDP)
  67. A model and specification that combines device and user authentication to dynamically provision network access to resources and enhance security. SDP includes three components:
  68. An SDP client on the connecting asset (e.g. a laptop).
  69. The SDP controller for authenticating and authorizing SDP clients and configuring the connections to SDP gateways.
  70. The SDP gateway for terminating SDP client network traffic and enforcing policies in communication with the SDP controller.
  71. Workload
  72. A unit of processing, which can be in a virtual machine, a container, or other abstraction. Workloads always run somewhere on a processor and consume memory. Workloads include a very diverse range of processing tasks, which range from traditional applications running in a virtual machine on a standard operating system, to GPU- or FPGA-based specialized tasks. Nearly every one of these options is supported in some form in cloud computing.
  73. Immutable Workloads
  74. Immutable infrastructure is an approach to managing services and software deployments on IT resources wherein components are replaced rather than changed. An application or service is effectively redeployed each time any change occurs. This is the preferred type of workload when possible.
  75. Virtual Machine
  76. Virtual machines are the most well-known form of compute abstraction, and are offered by all IaaS providers. They are commonly called instances in cloud computing since they are created (or cloned) from a base image.
  77. Containers
  78. Containers are code execution environments that run within an operating system (for now), sharing and leveraging resources of that operating system. A container is a constrained place to run segregated processes while still utilizing the kernel and other capabilities of the base OS. Multiple containers can run on the same virtual machine, or they can be implemented without the use of VMs at all and run directly on hardware.
  79. Platform-Based Workloads
  80. These are workloads running on a shared platform that aren’t virtual machines or containers, such as logic/procedures running on a shared database platform.
  81. Serverless
  82. Any situation where the cloud user doesn’t manage any of the underlying hardware or virtual machines, and just accesses exposed functions. Serverless covers containers and platform-based workloads, where the cloud provider manages all the underlying layers, including foundational security functions and controls.
  83. Business Continuity / Disaster Recovery (BC/DR)
  84. Should cover the entire stack of the logical model. Three aspects of BC/DR in the cloud:
  85. Ensuring continuity and recovery within a given cloud provider.
  86. Preparing for and managing cloud provider outages.
  87. Considering options for portability, in case you need to migrate providers or platforms.
  88.  
  89. Module 3
  90. Information/Data Governance
  91. Ensuring the use of data and information complies with organizational policies, standards and strategy — including regulatory, contractual, and business objectives.
  92. Enterprise Risk Management
  93. Measuring, managing, and mitigating uncertainty. Rooted in providing value to stakeholders.
  94. Information Risk Management
  95. A subset of enterprise risk management, aligns risk management to the tolerance of the data owner.
  96. Service Level Agreement (SLA)
  97. A service level agreement is a commitment between a service provider and a client. Particular aspects of the service (quality, availability, responsibilities) are agreed between the service provider and the service user. The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract.
  98. Compliance
  99. Validates awareness of and adherence to corporate obligations (e.g., corporate social responsibility, ethics, applicable laws, regulations, contracts, strategies, and policies). The compliance process assesses the state of that awareness and adherence, weighs the risks and potential costs of non-compliance against the costs to achieve compliance, and accordingly prioritizes, funds, and initiates any corrective actions deemed necessary.
  100. Audit
  101. How we validate compliance, can be performed internally or externally using third parties.
  102. Compliance Inheritance
  103. If a cloud provider's service is compliant with a regulation/standard, cloud consumers can build compliant services/applications on top of that service; however, this does not by itself guarantee that the final service/application is compliant.
  104. Compliance Management
  105. A tool of governance; it is how an organization assesses, remediates, and proves it is meeting these internal and external obligations.
  106. Attestation
  107. A legal statement that may require an NDA before being released; a third-party audit firm determines legal compliance and creates this statement.
  108. Artifacts
  109. The logs, documentation, and other materials needed for audits and compliance; they are the evidence to support compliance activities. Both providers and customers have responsibilities for producing and managing their respective artifacts.
  110. Cloud Controls Matrix (CCM)
  111. A list of security controls mapped by domain and aligned to various regulatory frameworks.
  112. Consensus Assessments Initiative Questionnaire (CAIQ)
  113. A standard set of security questions for cloud providers, allowing cloud consumers to directly compare providers, and allowing providers to reduce the need to respond to non-standard RFPs.
  114. STAR Registry
  115. The CSA STAR Registry documents the security and privacy controls provided by popular cloud computing offerings. This publicly accessible registry is designed for users of cloud services to assess their cloud providers, security providers and advisory and assessment services firms in order to make the best procurement decisions.
  116. STARWatch
  117. STARWatch is a SaaS application to help organizations manage compliance with CSA STAR (Security, Trust and Assurance Registry) requirements. STARWatch delivers the content of the Cloud Controls Matrix (CCM) and Consensus Assessments Initiative Questionnaire (CAIQ) in a database format, enabling users to manage compliance of cloud services with CSA best practices.
  118.  
  119. Module 4
  120. Cloud Data Storage
  121. May look like traditional storage, but it is actually quite different:
  122. Volume: Virtual hard drives for virtual machines or instances.
  123. Object: Resilient file storage via API. “Database” for files.
  124. Database: Multitenant, includes relational and non-relational.
  125. Applications: May store files using a wide range of techniques the cloud consumer has no insight into.
  126. Cloud Access and Security Broker (CASB)
  127. Also known as security gateways, CASBs discover internal use of cloud services using various mechanisms such as network monitoring, integrating with an existing network gateway or monitoring tool, or even monitoring DNS queries. After discovering which services your users are connecting to, most of these products then offer monitoring of activity on approved services through API connections (when available) or inline interception (man-in-the-middle monitoring). Many support DLP and other security alerting and even offer controls to better manage use of sensitive data in cloud services (SaaS, PaaS, and IaaS).
  128. Data Loss Prevention (DLP)
  129. A way to monitor and protect data that your employees access via monitoring local systems, web, email, and other traffic. It is not typically used within data centers, and thus is more applicable to SaaS than PaaS or IaaS, where it is typically not deployed.
  130. URL Filtering
  131. While not as robust as CASB, a URL filter/web gateway may help you understand which cloud services your users are using (or trying to use).
  132. Data Dispersion (Bit Splitting)
  133. This process takes chunks of data, breaks them up, and then stores multiple copies on different physical storage to provide high durability. Data stored in this way is thus physically dispersed. A single file, for example, would not be located on a single hard drive.
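
A toy Python sketch of the idea: a blob is split into chunks and each chunk is copied to several storage locations (directories stand in for separate physical devices here); real implementations use erasure coding and genuinely separate hardware.

# Toy illustration of data dispersion: split a blob into chunks and store
# multiple copies in different locations (directories stand in for devices).
import os

def disperse(data: bytes, chunk_size: int, locations: list, copies: int = 2):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for idx, chunk in enumerate(chunks):
        # Write each chunk to `copies` different locations, round-robin.
        for c in range(copies):
            loc = locations[(idx + c) % len(locations)]
            os.makedirs(loc, exist_ok=True)
            with open(os.path.join(loc, f"chunk_{idx}"), "wb") as f:
                f.write(chunk)

disperse(b"example file contents" * 100, 256, ["store_a", "store_b", "store_c"])
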
  134. Access Controls
  135. One of the core data security controls across the various technologies, access controls should be implemented with a minimum of three layers: the management plane, public and internal sharing controls, and application-level controls.
  136. Entitlement Matrix
  137. Documents which users, groups, and roles should access which resources and functions, and what they can do with them.
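
A minimal sketch of an entitlement matrix expressed as data, with a lookup helper; the roles, resources, and actions below are made-up examples.

# Hypothetical entitlement matrix: role -> resource -> allowed actions.
ENTITLEMENTS = {
    "developer": {"dev-project": {"deploy", "read-logs"}},
    "ops-admin": {"dev-project": {"deploy", "read-logs", "modify-network"},
                  "prod-project": {"deploy", "read-logs", "modify-network"}},
    "auditor":   {"prod-project": {"read-logs"}},
}

def is_entitled(role: str, resource: str, action: str) -> bool:
    return action in ENTITLEMENTS.get(role, {}).get(resource, set())

print(is_entitled("developer", "prod-project", "deploy"))   # False
print(is_entitled("auditor", "prod-project", "read-logs"))  # True
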
  138. Encryption
  139. Protects data by applying a mathematical algorithm that “scrambles” the data, which then can only be recovered by running it through an unscrambling (decryption) process with a corresponding key. The result is a blob of ciphertext.
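
For illustration, a short Python example using the third-party cryptography package (assumed to be installed); Fernet performs the scramble-with-a-key / recover-with-the-same-key round trip described above.

# Symmetric encryption round trip using the `cryptography` package
# (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the key; keep it separate from the data
engine = Fernet(key)                 # the encryption engine

ciphertext = engine.encrypt(b"credit card: 4111-1111-1111-1111")
print(ciphertext)                    # unreadable blob of ciphertext
print(engine.decrypt(ciphertext))    # original data recovered with the key
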
  140. Tokenization
  141. Takes the data and replaces it with a random value. It then stores the original and the randomized version in a secure database for later recovery.
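
A minimal sketch of tokenization, assuming an in-memory dict stands in for the secure token vault; production systems use a hardened database and often format-preserving tokens.

# Tokenization sketch: replace sensitive values with random tokens and keep
# the mapping in a "vault" (a dict here; a secured database in practice).
import secrets

vault = {}  # token -> original value

def tokenize(value: str) -> str:
    token = secrets.token_hex(16)   # random, carries no information about the value
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]

t = tokenize("4111-1111-1111-1111")
print(t)              # store/process this token instead of the real number
print(detokenize(t))  # authorized systems can recover the original
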
  142. Encryption System
  143. There are three components of an encryption system: data (the information you’re encrypting), the encryption engine (what performs the mathematical process of encryption), and key management (handles the keys for the encryption). You always want to separate the encryption key from the data and the encryption engine whenever possible.
  144. Transparent Database Encryption (TDE)
  145. A technology employed by Microsoft, IBM and Oracle to encrypt database files. TDE offers encryption at file level. TDE solves the problem of protecting data at rest, encrypting databases both on the hard drive and consequently on backup media.
  146. Key Management
  147. How keys are handled for encryption. The main considerations are performance, accessibility, latency, and security. Cloud key management options include provider managed, 3rd-party/customer managed, customer key managed, and hardware security modules (HSM).
  148. Customer-Managed Keys
  149. Also known as Bring Your Own Key (BYOK), this allows a cloud customer to manage their own encryption key while the provider manages the encryption engine.
  150. Hardware Security Module (HSM)
  151. A physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server.
  152. Enterprise Rights Management (ERM) / Digital Rights Management (DRM)
  153. Technologies that protect sensitive information from unauthorized access. Traditional DRM/ERM isn’t necessarily useful for cloud, but some SaaS/PaaS services may have “DRM-like” capabilities such as sharing or view controls that provide similar protections.
  154. Data Masking
  155. A method of creating a structurally similar but inauthentic version of an organization's data that can be used for purposes such as software testing and user training. The purpose is to protect the actual data while having a functional substitute for occasions when the real data is not required. Critical for test data generation and to ensure production data is not exposed in development environments.
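
A small sketch of static masking, assuming a simple record layout: real values are replaced with structurally similar but inauthentic ones before the data is handed to test or training environments.

# Data-masking sketch: produce a structurally similar but inauthentic copy.
import random

def mask_card(card: str) -> str:
    # Keep the format and the last four digits, randomize the rest.
    digits = [c for c in card if c.isdigit()]
    masked = [str(random.randint(0, 9)) for _ in digits[:-4]] + digits[-4:]
    out, i = [], 0
    for c in card:
        out.append(masked[i] if c.isdigit() else c)
        i += c.isdigit()
    return "".join(out)

record = {"name": "Alice Example", "card": "4111-1111-1111-1111"}
masked = {"name": "Test User 001", "card": mask_card(record["card"])}
print(masked)  # safe to load into a development/test environment
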
  156. Data Security Lifecycle
  157. A tool to help model your security controls and see where data flows, how it can be used, and how it should be used. When you can do something that shouldn’t be allowed, that’s where you need to insert the control. Phases include Create, Store, Use, Share, Archive, and Destroy, but data will bounce between all the phases as it is used.
  158.  
  159. Module 5
  160. DevOps
  161. The deeper integration of development and operations teams through better collaboration and communications, with a heavy focus on automating application deployment and infrastructure operations. There are multiple definitions, but the overall idea consists of a culture, philosophy, processes, and tools.
  162. Secure Software Development Lifecycle (SSDLC)
  163. A structured process for ensuring security needs are met throughout application development processes.
  164. Continuous Integration and Continuous Delivery (CI/CD)
  165. CI/CD embodies a culture, set of operating principles, and collection of practices that enable application development teams to deliver code changes more frequently and reliably. The implementation is also known as the CI/CD pipeline and is one of the best practices for DevOps teams to implement.
  166. Static Application Security Testing (SAST)
  167. On top of the normal range of tests, these should ideally incorporate checks on API calls to the cloud service. They should also look for any static embedded credentials for those API calls, which is a growing problem.
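
As a narrow illustration of the embedded-credentials check mentioned above, the sketch below greps source files for patterns that look like hard-coded secrets; real SAST tools use far richer rule sets and data-flow analysis, and the patterns here are only illustrative.

# Toy static check: flag lines that look like hard-coded credentials/API keys.
import re, sys, pathlib

PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key ID (illustrative)
]

def scan(path: str):
    for f in pathlib.Path(path).rglob("*.py"):
        for lineno, line in enumerate(f.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{f}:{lineno}: possible embedded credential")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
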
  168. Dynamic Application Security Testing (DAST)
  169. DAST tests running applications and includes tests such as web vulnerability testing and fuzzing. Due to the terms of service with the cloud provider, DAST may be limited and/or require pre-testing permission from the provider.
  170. Change Management (CM)
  171. In the cloud, CM includes not only the application, but also the infrastructure and the cloud management plane.
  172. Web Application Firewall (WAF)
  173. Typically protects web applications from attacks such as cross-site request forgery (CSRF), cross-site scripting (XSS), file inclusion, and SQL injection, among others. A WAF is usually part of a suite of tools which together can create a holistic defense against a range of attack vectors.
  174. Function as a Service (FaaS)
  175. A category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.
  176. Secure Design & Development
  177. Consists of five phases: training, define, design, develop, and test.
  178. Threat Modeling
  179. A process for identifying and addressing potential threats during design. A common model is STRIDE: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
  180. Functional Testing
  181. A type of software testing in which the system is tested against the functional requirements/specifications. Functions are tested by feeding them input and examining the output. Functional testing ensures that the requirements are properly satisfied by the application.
  182. Non-Functional Testing
  183. Non-functional testing is a type of software testing that checks non-functional aspects (performance, usability, reliability, etc.) of a software application. It is designed to test the readiness of a system against non-functional parameters, which are never addressed by functional testing.
  184. Static Analysis
  185. Also called static code analysis, static analysis is a method of computer program debugging that is done by examining the code without executing the program. The process provides an understanding of the code structure, and can help to ensure that the code adheres to industry standards. In static analysis testing, you should have an understanding of cloud API calls.
  186. Dynamic Analysis (Fuzzing)
  187. Fuzz testing, or Fuzzing, is a Black Box software testing technique, which basically consists in finding implementation bugs using malformed/semi-malformed data injection in an automated fashion.
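
A bare-bones Python fuzzing loop, assuming a hypothetical parse_record function under test: random malformed inputs are thrown at it and unexpected exceptions are reported.

# Minimal fuzzing loop against a hypothetical parser (illustrative only).
import random, string

def parse_record(data: str):
    # Stand-in for the code under test: expects "key=value".
    key, value = data.split("=")   # will raise on malformed input
    return {key: value}

def random_input(max_len=50):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

for _ in range(10000):
    data = random_input()
    try:
        parse_record(data)
    except ValueError:
        pass                        # expected failure mode for malformed input
    except Exception as exc:        # anything else is a potential bug worth triaging
        print(f"crash on input {data!r}: {exc!r}")
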
  188. Vulnerability Scanning
  189. An inspection of the potential points of exploit on a computer or network to identify security holes. A vulnerability scan detects and classifies system weaknesses in computers, networks and communications equipment and predicts the effectiveness of countermeasures.
  190. Vulnerability Assessment
  191. The process of defining, identifying, classifying and prioritizing vulnerabilities in computer systems, applications and network infrastructures and providing the organization doing the assessment with the necessary knowledge, awareness and risk background to understand the threats to its environment and react appropriately. New vulnerability analysis options, such as scanning in a deployment pipeline or using host-based agents, are often better used for cloud.
  192. Penetration Testing
  193. Also called pen testing or ethical hacking, is the practice of testing a computer system, network or web application to find security vulnerabilities that an attacker could exploit. CSA recommends adapting penetration testing for cloud using the following guidelines:
  194. Use a testing firm that has experience on the cloud provider where the application is deployed.
  195. Include developers and cloud administrators within the scope of the test. Many cloud breaches attack those who maintain the cloud, not the application on the cloud itself. This includes the cloud management plane.
  196. If the application is a multitenant app, then allow the penetration testers authorized access as a tenant to see if they can compromise the tenancy isolation and use their access to break into another tenant’s environment or data.
  197. Unit Testing
  198. A level of software testing where individual units/ components of a software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output.
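
For example, a minimal unit test with Python's built-in unittest module, exercising a single small function with known inputs and expected outputs.

# Minimal unit test: one unit (a function) tested in isolation.
import unittest

def add_tax(amount: float, rate: float = 0.1) -> float:
    return round(amount * (1 + rate), 2)

class TestAddTax(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_tax(100.0), 110.0)

    def test_custom_rate(self):
        self.assertEqual(add_tax(100.0, rate=0.2), 120.0)

if __name__ == "__main__":
    unittest.main()
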
  199. Regression Testing
  200. The process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process and, in larger companies, is done by code testing specialists.
  201. Version Control Repository
  202. Single source for your code, logs changes and merges additions. Ex: GitHub is a version control repository.
  203. Continuous Integration Server
  204. Monitors your version control repository and, when it detects changes in the files, can launch a series of tests: building an entire environment, building the artifacts and compiling the application code, and then running the tests.
  205. Runtime Application Self-Protection (RASP)
  206. Runtime application self-protection is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software.
  207. Software Defined Security (SDS)
  208. Security automated with APIs and code. A type of security model in which the information security in a computing environment is implemented, controlled, and managed by security software. SDS is software-managed, policy-driven, and governed security in which most of the security controls, such as intrusion detection, network segmentation, and access controls, are automated and monitored through software.
  209. Event-Driven Security
  210. Events in the cloud trigger execution of security code. Certain cloud providers support event-driven code execution. In these cases, the management plane detects various activities—such as a file being uploaded to a designated object storage location or a configuration change to the network or identity management—which can in turn trigger code execution through a notification message, or via serverless hosted code. Security can define events for security actions and use the event-driven capabilities to trigger automated notification, assessment, remediation, or other security processes.
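
A sketch of the pattern in Python, assuming a hypothetical serverless handler and event shape (the field names are illustrative, not any specific provider's format): an object-upload event triggers an automated security check.

# Hypothetical serverless handler: a cloud event triggers a security check.
def handle_event(event: dict):
    # Illustrative event shape; real providers each define their own schema.
    if event.get("type") == "object.uploaded":
        bucket, key = event["bucket"], event["key"]
        if not event.get("encrypted", False):
            # Automated response: alert (and, in practice, remediate or quarantine).
            print(f"ALERT: unencrypted object {key} uploaded to {bucket}")
        else:
            print(f"ok: {bucket}/{key} is encrypted at rest")

# Example invocation, as the platform would deliver it:
handle_event({"type": "object.uploaded", "bucket": "billing-data",
              "key": "invoices.csv", "encrypted": False})
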
  211. Air Gap
  212. An air gap is a network security measure employed on one or more computers to ensure that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It means a computer or network that is electrically disconnected (with a conceptual air gap) from all other networks.
  213. Identity and Access Management (IAM)
  214. IAM, at its core, is concerned with mapping some form of an entity (a person, system, piece of code, etc.) to a verifiable identity associated with various attributes (which can change based on current circumstances), and then making a decision on what they can or can’t do based on entitlements.
  215. Entity
  216. Discrete types that can have an identity; these include users, devices, code, organizations, and agents.
  217. Identity
  218. The unique expression of an entity within a given namespace.
  219. Identifier
  220. The means by which an identity can be asserted; for digital identities, this is often a cryptographic token.
  221. Attributes
  222. Facets of an identity (e.g., org. unit or IP address).
  223. Persona
  224. Expression of an identity with attributes that indicate context, e.g., a developer logged into a given project.
  225. Role
  226. Has multiple meanings. Typically used to indicate a persona or subset. E.g., “developer” vs. “admin”.
  227. Authentication
  228. The process of confirming an identity (abbreviated authn).
  229. Multifactor Authentication
  230. Use of multiple factors in authentication (e.g., username + password + token).
  231. Access Control
  232. Restricting access to a resource; access management is the corresponding process.
  233. Authoritative Source
  234. The "root" source for an identity, such as a directory server.
  235. Authorization
  236. Allowing an identity access to something (abbreviated authz).
  237. Entitlement
  238. Mapping an identity to an authorization.
  239. Federated Identity Management
  240. The process of asserting an identity across different systems.
  241. Identity Provider
  242. The trusted source of the identity in federation.
  243. Relying Party
  244. The system that relies on an identity assertion from an identity provider.
  245. Security Assertion Markup Language (SAML)
  246. Security Assertion Markup Language (SAML) is a standard protocol for web browser Single Sign-On (SSO) using secure tokens. Instead of passing passwords, SAML uses standard cryptography and digital signatures to pass a secure sign-in token from an identity provider to a SaaS application. Supported by nearly all cloud providers, it is the most common way of communicating authentication and authorization between two parties in federated identity.
  247. Open Authorization (OAuth)
  248. OAuth is an open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites but without giving them the passwords. This mechanism is used by companies such as Amazon, Google, Facebook, Microsoft and Twitter to permit the users to share information about their accounts with third party applications or websites.
  249. OAuth 2.0
  250. OAuth 2.0 provides specific authorization flows for web applications, desktop applications, mobile phones, and smart devices. The specification and associated RFCs are developed by the IETF OAuth WG; the main framework was published in October 2012.
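
As a sketch of one OAuth 2.0 flow (the authorization-code exchange from RFC 6749), the client below trades an authorization code for an access token at a hypothetical token endpoint using the requests library; IDs, secrets, and URLs are placeholders.

# OAuth 2.0 authorization-code exchange (RFC 6749, section 4.1.3) against a
# hypothetical token endpoint.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # placeholder

def exchange_code_for_token(code: str) -> dict:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()   # typically contains access_token, token_type, expires_in

# token = exchange_code_for_token(code_received_on_redirect)
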
  251. OpenID
  252. OpenID is an open standard and decentralized authentication protocol. Promoted by the non-profit OpenID Foundation, it allows users to be authenticated by co-operating sites (known as relying parties, or RP) using a third-party service, eliminating the need for webmasters to provide their own ad hoc login systems, and allowing users to log into multiple unrelated websites without having to have a separate identity and password for each.
  253. eXtensible Access Control Markup Language (XACML)
  254. Not as common a form of identity management. The standard defines a declarative fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies.
  255. System for Cross-Domain Identity Management (SCIM)
  256. Not as common a form of identity management. SCIM is a standard for automating the exchange of user identity information between identity domains, or IT systems.
  257. “Free Form” Model
  258. Internal identity providers/sources (often directory servers) connect directly to cloud providers.
  259. “Hub & Spoke” Model
  260. Internal identity providers/sources communicate with a central broker or repository that then serves as the identity provider for federation to cloud providers.
  261. Identity Providers
  262. Identity providers don’t need to be located only on-premises; many cloud providers now support cloud-based directory servers that support federation internally and with other cloud services.
  263. Identity Brokers
  264. Identity brokers handle federating between identity providers and relying parties (which may not always be a cloud service). They can be located on the network edge or even in the cloud in order to enable web-SSO.
  265. Incident Response
  266. An organized approach to addressing and managing the aftermath of a security breach or cyberattack, also known as an IT incident, computer incident or security incident. The goal is to handle the situation in a way that limits damage and reduces recovery time and costs.
  267. Provisioning
  268. Provisioning is the process of coordinating the creation of user accounts, e-mail authorizations in the form of rules and roles, and other tasks such as provisioning of physical resources associated with enabling new users.
  269. Deprovisioning
  270. Deprovisioning involves deactivating user accounts, email authorizations, and other tasks such as deprovisioning of physical resources associated with disabling users who no longer need access.
  271. Role-Based Access Controls (RBAC)
  272. RBAC is an approach to restricting system access to authorized users. It is used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC).
  273. Attribute-Based Access Controls (ABAC)
  274. Also known as policy-based access control, defines an access control paradigm whereby access rights are granted to users through the use of policies which combine attributes together. The policies can use any type of attributes (user attributes, resource attributes, object, environment attributes, etc.). This is the best model for cloud when comparing to RBAC, as it is far more granular and flexible.
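
A compact sketch of ABAC evaluation in Python: policies are predicates over user, resource, and environment attributes, and access is granted only if some policy matches. The attribute names here are invented for illustration.

# ABAC sketch: policies combine attributes of the user, resource, and environment.
POLICIES = [
    # Developers may deploy to dev resources from the corporate network.
    lambda u, r, e: u["role"] == "developer" and r["env"] == "dev"
                    and e["network"] == "corporate",
    # Anyone with MFA may read public resources.
    lambda u, r, e: e["mfa"] and r["classification"] == "public",
]

def is_allowed(user, resource, env):
    return any(policy(user, resource, env) for policy in POLICIES)

user = {"role": "developer"}
resource = {"env": "dev", "classification": "internal"}
print(is_allowed(user, resource, {"network": "corporate", "mfa": True}))  # True
print(is_allowed(user, resource, {"network": "home", "mfa": False}))      # False
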
  275.  
  276. Module 6
  277. Security as a Service (SecaaS)
  278. SecaaS providers secure systems and data in the cloud as well as hybrid and traditional enterprise networks via cloud-based services. This includes dedicated SecaaS providers, as well as packaged security features from general cloud-computing providers. Security as a Service encompasses a very wide range of possible technologies, but they must meet the following criteria:
  279. SecaaS includes security products or services that are delivered as a cloud service.
  280. To be considered SecaaS, the services must still meet the essential NIST characteristics for cloud computing, as defined in Domain 1.
  281. Autoscaling
  282. Autoscaling is a method used in cloud computing, whereby the amount of computational resources in a server farm, typically measured in terms of the number of active servers, scales automatically based on the load on the farm. It is closely related to, and builds upon, the idea of load balancing.
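
A simplified Python sketch of the scaling decision itself (the metric source and the add/remove calls are hypothetical): capacity grows when load per server is high and shrinks when it is low.

# Toy autoscaling decision: scale the server count to track load.
def desired_count(load: float,
                  target_per_server: float = 70.0,
                  minimum: int = 2, maximum: int = 20) -> int:
    # load = total work (e.g. requests/sec); keep each server near the target.
    needed = max(1, round(load / target_per_server))
    return min(max(needed, minimum), maximum)

print(desired_count(load=350))  # 5  -> scale out
print(desired_count(load=90))   # 2  -> scale in to the minimum
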
  283. Patch Management
  284. Patch management is a strategy for managing patches or upgrades for software applications and technologies. A patch management plan can help a business or organization handle these changes efficiently.
  285. Web Security Gateways
  286. Web security gateways involve real-time protection, offered either on-premises through software and/or appliance installation, or via the cloud by proxying or redirecting web traffic to the cloud provider (or a hybrid of both).
  287. Security Information and Event Management (SIEM)
  288. SIEM systems aggregate (via push or pull mechanisms) log and event data from virtual and real networks, applications, and systems. This information is then correlated and analyzed to provide real-time reporting on and alerting of information or events that may require intervention or other types of responses. Cloud SIEMs collect this data in a cloud service, as opposed to a customer-managed, on-premises system.
  289. Intrusion Detection/Prevention (IDS/IPS)
  290. IDS/IPS systems monitor behavior patterns using rule-based, heuristic, or behavioral models to detect anomalies in activity which might present risks to the enterprise. With IDS/IPS as a service, the information feeds into a service-provider’s managed platform, as opposed to the customer being responsible for analyzing events themselves.
  291. Big Data
  292. Big data includes a collection of technologies for working with extremely large datasets that traditional data-processing tools are unable to manage. It’s not any single technology but rather refers commonly to distributed collection, storage, and data-processing frameworks.
  293. High Volume
  294. A large size of data, in terms of number of records or attributes.
  295. High Velocity
  296. Fast generation and processing of data, i.e., real-time or stream data.
  297. High Variety
  298. Structured, semi-structured, or unstructured data.
  299. Distributed Data Collection
  300. Mechanisms to ingest large volumes of data, often of a streaming nature. This could be as “lightweight” as web-click streaming analytics and as complex as highly distributed scientific imaging or sensor data. Not all big data relies on distributed or streaming data collection, but it is a core big data technology.
  301. Distributed Storage
  302. The ability to store the large data sets in distributed file systems (such as Google File System, Hadoop Distributed File System, etc.) or databases (often NoSQL), which is often required due to the limitations of non-distributed storage technologies.
  303. Distributed Processing
  304. Tools capable of distributing processing jobs (such as MapReduce, Spark, etc.) for the effective analysis of data sets so massive and rapidly changing that single-origin processing can't effectively handle them.
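
To make the map/reduce idea concrete, here is a single-machine Python word count that parallelizes the map step with multiprocessing; real frameworks such as MapReduce or Spark distribute the same pattern across many nodes.

# Word count in the map/reduce style, parallelized locally with multiprocessing.
from collections import Counter
from multiprocessing import Pool

def map_phase(chunk: str) -> Counter:
    return Counter(chunk.split())          # per-chunk partial counts

def reduce_phase(partials) -> Counter:
    total = Counter()
    for p in partials:
        total.update(p)                    # merge partial results
    return total

if __name__ == "__main__":
    chunks = ["cloud security cloud", "big data cloud", "data data security"]
    with Pool() as pool:
        partials = pool.map(map_phase, chunks)
    print(reduce_phase(partials))          # e.g. Counter({'cloud': 3, 'data': 3, ...})
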
  305. Internet of Things
  306. Internet of Things is a blanket term for non-traditional computing devices used in the physical world that utilize Internet connectivity. It includes everything from Internet-enabled operational technology (used by utilities like power and water) to fitness trackers, connected light bulbs, medical devices, and beyond.
  307.  