
Found 6705 Articles for Database

201 Views
Online information is vulnerable to abuse by third parties, such as identity theft, fraud, and phishing schemes. The internet has opened countless new avenues for commerce and communication, but it has also made it simpler for identity thieves to target their victims. It is therefore important to protect the sensitive data that many businesses, nonprofits, and governments keep on file, such as loyalty program information, customer data, transaction details, and employee information. Several rules and regulations, like the General Data Protection Regulation and the Privacy Shield, have been adopted in various regions of the world to ensure this is ... Read More

2K+ Views
The capacity to retain data is rapidly emerging as one of the most crucial aspects of contemporary business, government, and even personal life. Most successful companies have data storage systems that are properly organized, secure, and easy to access when necessary. Accounting records, policy documents, and human resources information, to name a few examples, must be kept in a safe system that protects against data loss and theft and has a reliable recovery method in place. Effective data storage saves space and money compared with keeping data in scattered files or on individual computers. Centralized data storage ... Read More

6K+ Views
Knowledge-based agents maintain a searchable body of knowledge that they can reason over. These agents keep an internal state of knowledge, make decisions based on it, update it, and perform actions according to those decisions. In essence, they respond to stimuli much as humans react to different situations. Example − based on the user's question (which acts as the external stimulus), they provide an answer from their knowledge base (the data store where they keep their basic knowledge) that satisfies the user's question.
Knowledge Base Features
It has the below-mentioned features −
Knowledge base (KB) − It is ... Read More
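The TELL/ASK cycle described above can be sketched in a few lines of Java. This is a minimal, illustrative sketch only − the class and method names (KnowledgeBasedAgent, tell, ask) are hypothetical, and the knowledge base is reduced to a simple map:

   import java.util.HashMap;
   import java.util.Map;

   // Minimal knowledge-based agent: it keeps an internal knowledge base,
   // updates it with new facts, and answers external stimuli (questions) from it.
   public class KnowledgeBasedAgent {
      // The knowledge base (KB): a simple fact store for this sketch
      private final Map<String, String> knowledgeBase = new HashMap<>();

      // TELL − add or update a fact, changing the agent's internal state
      public void tell(String fact, String answer) {
         knowledgeBase.put(fact, answer);
      }

      // ASK − answer a question (the external stimulus) from current knowledge
      public String ask(String question) {
         return knowledgeBase.getOrDefault(question, "I do not know yet.");
      }

      public static void main(String[] args) {
         KnowledgeBasedAgent agent = new KnowledgeBasedAgent();
         agent.tell("capital of France", "Paris");
         System.out.println(agent.ask("capital of France")); // prints: Paris
      }
   }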

2K+ Views
Most of the time when we use JPA queries, the result obtained is mapped to an entity or a particular data type. But when we use an aggregate function in a query, handling the result sometimes requires us to customize our JPA query. Let's understand this with the help of an example (Department, Employee) −
Dept.java

   @Entity
   public class Dept {
      @Id
      private Long id;
      private String name;
      @OneToMany(mappedBy = "dep")
      private List<Employee> emp;
      //Getters
      //Setters
   }

A department can have one or more ... Read More
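For illustration, here is a minimal sketch of consuming such an aggregate result. It assumes the Dept/Employee mapping above and the javax.persistence API shipped with Hibernate 5.x; the helper method is hypothetical. Because the query selects a name plus a COUNT, each row comes back as an Object[] rather than a mapped entity:

   import java.util.List;
   import javax.persistence.EntityManager;

   public class DeptStats {
      public static void printEmployeeCounts(EntityManager em) {
         // Aggregate JPQL query: one Object[] row per department
         List<Object[]> rows = em.createQuery(
               "SELECT d.name, COUNT(e) FROM Dept d JOIN d.emp e GROUP BY d.name",
               Object[].class)
            .getResultList();
         for (Object[] row : rows) {
            // row[0] is the department name, row[1] the employee count
            System.out.println(row[0] + " has " + row[1] + " employees");
         }
      }
   }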

10K+ Views
In this article, we will see how we can connect to a MySQL database using an ORM (object-relational mapping) framework like Hibernate. First of all, we need to add the Maven dependency for Hibernate in our pom.xml file −

   <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
      <version>5.6.2.Final</version>
   </dependency>

Now, let us define an entity class that will be mapped to a database table by Hibernate.

   @Entity
   @Table(name = "Employee")
   public class Employee {
      @Id
      @GeneratedValue(strategy = GenerationType.AUTO)
      Long id;
      @Column(name = ... Read More
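To round the snippet off, here is a minimal, illustrative sketch of bootstrapping Hibernate against MySQL programmatically. The JDBC URL, credentials, and dialect below are placeholder assumptions, not values from the article:

   import org.hibernate.Session;
   import org.hibernate.SessionFactory;
   import org.hibernate.cfg.Configuration;

   public class HibernateMySQLDemo {
      public static void main(String[] args) {
         Configuration cfg = new Configuration()
            .setProperty("hibernate.connection.driver_class", "com.mysql.cj.jdbc.Driver")
            .setProperty("hibernate.connection.url", "jdbc:mysql://localhost:3306/testdb") // placeholder
            .setProperty("hibernate.connection.username", "root")      // placeholder
            .setProperty("hibernate.connection.password", "password")  // placeholder
            .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL8Dialect")
            .setProperty("hibernate.hbm2ddl.auto", "update")
            .addAnnotatedClass(Employee.class);

         SessionFactory factory = cfg.buildSessionFactory();
         Session session = factory.openSession();
         session.beginTransaction();
         Employee employee = new Employee();
         // ... populate the entity's fields here ...
         session.save(employee);              // INSERT issued against MySQL
         session.getTransaction().commit();
         session.close();
         factory.close();
      }
   }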

2K+ Views
Caching helps to reduce database network calls made while executing queries. The first level cache is linked to a session and is implemented implicitly. First level cache entries exist only as long as the session object is alive; once the session object is terminated/closed, the cached objects are gone. The second level cache works across multiple session objects. It is linked to a session factory, and its cached objects are available to all sessions created from a single session factory. These cached objects are discarded when that particular session factory is closed.
Implementing second level caching
We need to add the following dependencies in order to ... Read More
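As an illustration, a minimal sketch of opting an entity into the second level cache is shown below. It assumes an Ehcache region factory (hibernate-ehcache) is on the classpath, and the Product entity is hypothetical:

   // Illustrative Hibernate properties (e.g., in hibernate.cfg.xml):
   //   hibernate.cache.use_second_level_cache = true
   //   hibernate.cache.region.factory_class = org.hibernate.cache.ehcache.EhCacheRegionFactory

   import javax.persistence.Cacheable;
   import javax.persistence.Entity;
   import javax.persistence.Id;
   import org.hibernate.annotations.Cache;
   import org.hibernate.annotations.CacheConcurrencyStrategy;

   @Entity
   @Cacheable
   @Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // cached across sessions
   public class Product {
      @Id
      private Long id;
      private String name;
      // Getters and setters omitted for brevity
   }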

4K+ Views
Bucketing is a method in Hive for organizing data. It is a technique for separating data into ranges known as buckets. Bucketing in Hive comes in handy when partitioning alone becomes impractical. The bucket for a specific record is determined by hashing the value of a chosen column. Partitioned tables can additionally be bucketed to divide the data further and run queries more efficiently. Every bucket is stored as a file within the table's or the partition's directory on HDFS. Records having the same value in the bucketing column are always stored in the same bucket. Bucketing can ... Read More
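To make the idea concrete, here is a minimal sketch that creates a partitioned, bucketed table through Hive's JDBC driver; the connection URL, table, and column names are assumptions:

   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.Statement;

   public class HiveBucketingDemo {
      public static void main(String[] args) throws Exception {
         Class.forName("org.apache.hive.jdbc.HiveDriver");
         try (Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
              Statement stmt = con.createStatement()) {
            // Rows are assigned to one of 4 buckets by hash(emp_id) % 4, so
            // records with the same emp_id always land in the same bucket file.
            stmt.execute("CREATE TABLE emp_bucketed (emp_id INT, name STRING, dept STRING) "
                       + "PARTITIONED BY (country STRING) "
                       + "CLUSTERED BY (emp_id) INTO 4 BUCKETS "
                       + "STORED AS ORC");
         }
      }
   }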

784 Views
The full name of RDD is Resilient Distributed Dataset. Spark's performance rests on this abstraction, which lets it handle major data-processing workloads consistently, including MapReduce-style batch jobs, streaming, SQL, machine learning, graphs, etc. Spark supports many programming languages, including Scala, Python, and R, and RDDs can be created and manipulated from each of them.
How to create RDD
Spark can build RDDs from many sources, including the local file system, HDFS, memory, and HBase. For the local file system, we can create an RDD in the following way −

   val distFile = sc.textFile("file:///user/root/rddData.txt")

By default, Spark takes ... Read More
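The same idea can be expressed with Spark's Java API. The sketch below mirrors the Scala example's file path and adds an illustrative word count; the local[*] master and the path are placeholders:

   import java.util.Arrays;
   import org.apache.spark.SparkConf;
   import org.apache.spark.api.java.JavaRDD;
   import org.apache.spark.api.java.JavaSparkContext;

   public class RDDDemo {
      public static void main(String[] args) {
         SparkConf conf = new SparkConf().setAppName("RDDDemo").setMaster("local[*]");
         try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Create an RDD from a local file, as in the Scala example above
            JavaRDD<String> lines = sc.textFile("file:///user/root/rddData.txt");
            // flatMap is a lazy transformation; count() is the action that runs it
            long words = lines
               .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
               .count();
            System.out.println("Words: " + words);
         }
      }
   }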

305 Views
Before Hadoop and big data concepts were available, data was stored in relational database management systems. After the introduction of big data concepts, it became essential to store data more concisely and efficiently. However, data held in a relational database management system often needs to be transferred into Hadoop. With Sqoop, we can transfer this data: Sqoop moves data from a relational database management system to a Hadoop cluster, and thus facilitates the transfer of large volumes of data from one source to another. Here are the basic features of Sqoop − Sqoop ... Read More
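As a minimal sketch, a Sqoop 1 import can also be driven from Java through the org.apache.sqoop.Sqoop entry point, which accepts the same arguments as the sqoop command line; every connection detail below is a placeholder assumption:

   import org.apache.sqoop.Sqoop;

   public class SqoopImportDemo {
      public static void main(String[] args) {
         String[] sqoopArgs = {
            "import",
            "--connect", "jdbc:mysql://localhost:3306/corp", // placeholder
            "--username", "dbuser",                          // placeholder
            "--password", "dbpass",                          // placeholder
            "--table", "EMPLOYEES",
            "--target-dir", "/user/hadoop/employees",
            "--num-mappers", "1"
         };
         int exitCode = Sqoop.runTool(sqoopArgs); // 0 indicates success
         System.out.println("Sqoop exit code: " + exitCode);
      }
   }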

7K+ Views
Apache Hadoop provides a distributed file system, but for data processing we also need a SQL-like language that can transform data or carry out complex data conversions according to our requirements. Apache Pig achieves this data manipulation: it offers a high-level, SQL-like scripting language that runs on top of Hadoop. Pig scripts work with structured and unstructured data and are translated into MapReduce jobs that execute on the Hadoop cluster. We must know about Pig data types before understanding operators in Pig. Any data loaded into Pig has a specific structure and schema ... Read More
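As a minimal sketch, the snippet below embeds Pig in Java via PigServer and declares an explicit schema using Pig's scalar data types (int, chararray, double); the file path and schema are assumptions, and the same statements could equally be typed into the Grunt shell:

   import org.apache.pig.ExecType;
   import org.apache.pig.PigServer;

   public class PigTypesDemo {
      public static void main(String[] args) throws Exception {
         PigServer pig = new PigServer(ExecType.LOCAL);
         // Fields without a declared type would default to bytearray
         pig.registerQuery("emp = LOAD 'emp.csv' USING PigStorage(',') "
                         + "AS (id:int, name:chararray, salary:double);");
         pig.registerQuery("high = FILTER emp BY salary > 50000.0;");
         pig.store("high", "high_earners"); // runs the pipeline and writes results
      }
   }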