MongoDB with Python: a quick introduction (I)



Here are some basic steps for data manipulation in MongoDB using Python.

Download pymongo
pymongo is a native Python driver for MongoDB.
The PyMongo distribution contains tools for working with MongoDB.

(1) Installing PyMongo is very simple if you have setuptools installed. To install setuptools:
(a) Download the egg file for your version of Python (available from the setuptools project page).
(b) After downloading it, execute the egg as if it were a shell script:
$ sudo sh setuptools-0.6c11-py2.6.egg

(2) With setuptools installed, you can install pymongo using:
$ sudo easy_install pymongo
Searching for pymongo
Best match: pymongo 2.0.1
Processing pymongo-2.0.1-py2.6-linux-i686.egg
pymongo 2.0.1 is already the active version in easy-install.pth

Using /usr/local/lib/python2.6/dist-packages/pymongo-2.0.1-py2.6-linux-i686.egg
Processing dependencies for pymongo
Finished processing dependencies for pymongo

(3) Alternatively, you can install PyMongo from source:
$ git clone git://github.com/mongodb/mongo-python-driver.git pymongo
$ cd pymongo/
$ python setup.py install

To test whether the installation was successful, try importing the pymongo package in Python; the import should not raise an exception:
jdoe@lambda:$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> import pymongo
>>>

Connect to the MongoDB server and check that you're connected to the local host on the default port.
>>> from pymongo import Connection
>>> connection = Connection()            -- create a connection to the default server/port
>>> connection                           -- print connection details
Connection('localhost', 27017)

-- You can explicitly specify the host and TCP port where the MongoDB service you want to connect to is running.
>>> connection = Connection('192.117.47.23', 20120)
>>>

Connect to a database
Once connected to the database server, you need to connect to a specific mongodb database.
>>> connection.database_names()       --- list the available databases in the server
[u'mynewdb', u'local', u'test']
>>>
>>> db = connection['mynewdb']        --- connects to 'mynewdb'
>>>  
>>> db.name                           --- list name of database you're connected to
u'mynewdb'
>>>

Access database collections
Collections can be thought of as analogous to tables in relational databases. To see the existing collections in the database:
>>> db.collection_names()           --- list existing collections
[u'mycollection', u'system.indexes', u'things', u'comments']
>>>
>>> things = db['things']
>>>
>>> things.name                     --- print collection name
u'things'
>>>
>>> things.database                 --- database that holds the collection
Database(Connection('localhost', 27017), u'mynewdb')
>>>
>>> things.count()                  --- get the number of existing documents in the collection
5


  • Manipulating data in MongoDB with CRUD operations: Create, Retrieve, Update, Delete
  • These are the atomic operations used to manipulate the data.
  • These are method calls equivalent to the DML statements in relational databases (Insert, Select, Update, Delete).
  • Comparing data manipulation operations on a relational table and on a MongoDB collection (a runnable PyMongo sketch follows the comparison):
Relational Database                             MongoDB
Table BLOG (author, post, tags, date)           Collection BLOG (columns not statically defined)

INSERT statement:
SQL> INSERT INTO BLOG
     VALUES ('joe', v_post, 'MongoDB, Python', sysdate);
>>> post = { "author": "joe",
...          "text": "Blogging about MongoDB",
...          "tags": ["MongoDB", "Python"],
...          "date": datetime.datetime.utcnow()}
>>> db.blog.insert(post)

SELECT statement:
SQL> SELECT * FROM BLOG
     WHERE author = 'joe';
>>> db.blog.find({"author": "joe"})

UPDATE statement:
SQL> UPDATE BLOG SET tags = 'MongoDB, Python'
     WHERE author = 'joe';
>>> db.blog.update({"author": "joe"},
...                {"$set": {"tags": ["MongoDB", "Python"]}})

DELETE statement:
SQL> DELETE FROM BLOG WHERE author = 'joe';
>>> db.blog.remove({"author": "joe"})
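The sketch below strings the four operations together in a single PyMongo session. It is a minimal example, assuming a server on localhost:27017 and using a hypothetical scratch database named 'crud_demo':

import datetime
from pymongo import Connection      # pymongo 2.x, as used throughout this post

db = Connection()['crud_demo']      # hypothetical scratch database

# Create: insert one document
post = {"author": "joe",
        "text": "Blogging about MongoDB",
        "tags": ["MongoDB", "Python"],
        "date": datetime.datetime.utcnow()}
db.blog.insert(post)

# Retrieve: query by author
print db.blog.find_one({"author": "joe"})

# Update: modify only the tags field with the $set modifier
db.blog.update({"author": "joe"},
               {"$set": {"tags": ["MongoDB", "Python", "NoSQL"]}})

# Delete: remove the document
db.blog.remove({"author": "joe"})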

Creating a new collection
Databases and Collections in MongoDB are created only when the first data is inserted.
$ ipython
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) 
Type "copyright", "credits" or "license" for more information.
...
In [2]: import pymongo                  --- import pymongo package
In [3]: from pymongo import Connection
In [4]: from bson import ObjectId

In [5]: connection = Connection()
In [6]: connection
Out[6]: Connection('localhost', 27017)  --- connected to localhost, in the default TCP port
In [7]: connection.database_names()     --- list existing databases
Out[7]: [u'test', u'local']

In [8]: db = connection['blogdb']       --- connect to a new database. 
                                        --- It will be created when the first object is inserted.

In [9]: post = { "author": "John", 
   ...:          "text": "Blogging about MongoDB"}

In [10]: db.posts.insert(post)          --- The first insert creates the new collection 'posts'
Out[10]: ObjectId('...')
In [11]: db.collection_names()
Out[11]: [u'system.indexes', u'posts']


Note: Collections can also be organized in namespaces, defined using dot notation. For example, you could create two collections named book.info and book.authors (see the sketch below).
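A minimal sketch of that layout, assuming the 'blogdb' database from the session above; the two collection names and documents are just illustrative:

from pymongo import Connection

db = Connection()['blogdb']
# 'book.info' and 'book.authors' share the 'book' namespace prefix
db['book.info'].insert({"title": "Python Notes", "year": 2011})
db['book.authors'].insert({"name": "Joann"})
print db.collection_names()       # now also lists u'book.info' and u'book.authors'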

Inserting a document in a collection
  • In MongoDB, documents within a collection do not all have to have the same number and types of fields ("columns"). In other words, schemas in MongoDB are dynamic and can vary within a single collection.
  • PyMongo uses dictionary objects to represent JSON-style documents.
  • To add a new document to a collection, using ipython:
In [9]: post = { 
   ...:     'author': 'Joann',
   ...:     'text': 'Just finished reading the Book of Nights'}

In [10]: db.posts.insert(post)        --- Method call to create a new document (post)
Out[10]: ObjectId('4eb99ad5a9e15833b1000000')

In [11]: for post in db.posts.find():   --- list all documents in the posts collection
   ....:     post
   ....:     
   ....:     
Out[11]: 
{u'_id': ObjectId('4eb99ad5a9e15833b1000000'),
 u'author': u'Joann',
 u'text': u'Just finished reading the Book of Nights'}
  • Note that you don't need to specify the "_id" field when inserting a new document into a collection.
  • The document identifier is automatically generated by the database and is unique across the collection (a lookup-by-_id sketch follows the bulk-insert example below).
  • You can also execute bulk inserts:
In [13]: many_posts = [{'author': 'David',
   ....:                'text' : "David's Blog"},
   ....:               {'author': 'Monique',
   ....:                'text' : 'My photo blog'}]

In [14]: db.posts.insert(many_posts)
Out[14]: [ObjectId('4eb9bcada9e15809f3000000'), ObjectId('4eb9bcada9e15809f3000001')]

In [15]: for post in db.posts.find():
   ....:     post
   ....:     
   ....:     
Out[15]: 
{u'_id': ObjectId('4eb99ad5a9e15833b1000000'),
 u'author': u'Joann',
 u'text': u'Just finished reading the Book of Nights'}
Out[15]: 
{u'_id': ObjectId('4eb9bcada9e15809f3000000'),
 u'author': u'David',
 u'text': u"David's Blog"}
Out[15]: 
{u'_id': ObjectId('4eb9bcada9e15809f3000001'),
 u'author': u'Monique',
 u'text': u'My photo blog'}
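Since insert() returns the generated ObjectId (or a list of them for a bulk insert), you can use that value to read a single document back. A minimal sketch, assuming the 'blogdb' database from this session; the inserted document is illustrative:

from pymongo import Connection

db = Connection()['blogdb']
post_id = db.posts.insert({'author': 'Joann', 'text': 'Looking up posts by _id'})  # illustrative document
print db.posts.find_one({'_id': post_id})   # the generated ObjectId uniquely identifies the document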

Selecting (reading) documents inside collections
  • Data in MongoDB is represented by structures of key-value pairs, using JSON-style documents.
  • Let's query the collection "things" and ask for ONE document in that collection. Use the find_one() method.
>>> things.find_one()                             --- returns the first document in the collection
{u'_id': ObjectId('4eb787821b02fd09c403b219'), u'name': u'mongo'}

Here it returned a document containing two fields (key-value pairs): 
  "_id": ObjectId('4eb787821b02fd09c403b219')  --- (an identifier for the document), and 
  "name": 'mongo'                              --- a "column" "name" with its associated value, the string 'mongo'.

We can also define criteria for the query. For example,
(a) return one document with field "name" equal to "mongo"
>>> things.find_one({"name":"mongo"});
{u'_id': ObjectId('4eb787821b02fd09c403b219'), u'name': u'mongo'}
>>>

(b) return one document with field "name" equal to "book"
>>> things.find_one({"name":"book"});
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(2011, 11, 7, 19, 47, 44, 722000), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}

Note: The dynamic nature of MongoDB database schemas can be seen in the results of the queries above. Here the collection "things" has two documents with different numbers of fields ("columns") and datatypes: 
 {"name": "mongo"}
 {"name": "book", "title": "Mastering MongoDB", "keywords": ["NoSQL", "MongoDB", "PyMongo"], "date": datetime.datetime(2011, 11, 7, 19, 47, 44, 722000)} 

Querying more than one document
A query returns a cursor pointing to all the documents that matched the query criteria.
To see these documents you need to iterate through the cursor elements:
>>> for thing in things.find():
...     thing
... 
{u'_id': ObjectId('...'), u'name': u'mongo'}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 1.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 2.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 3.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 4.0}
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Python and MongoDB'}
{u'name': u'book', u'title': u'Python Notes', u'keywords': [u'programming', u'Python'], u'year': 2011, u'date': datetime.datetime(...), u'_id': ObjectId('4...')}

-- Alternatively, you can explicitly define a cursor variable: 
>>> cursor = things.find()
>>> for x in cursor:
...     x
... 
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 1.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 2.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 3.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 4.0}
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Python and MongoDB'}
{u'name': u'book', u'title': u'Python Notes', u'keywords': [u'programming', u'Python'], u'year': 2011, u'date': datetime.datetime(...), u'_id': ObjectId('...')}
>>> 


You can also return only some of the document fields. (Similar to a SQL query that returns only a subset of the table columns).
>>> for thing in things.find({"name":"book"}, {"keywords": 1}):
...     thing
... 
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'_id': ObjectId('...')}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'_id': ObjectId('...')}
{u'keywords': [u'programming', u'Python'], u'_id': ObjectId('...')}
>>> 
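By default the "_id" field is always included in the results. If you don't need it, the fields document passed to find() can exclude it explicitly; a small sketch, assuming the same 'things' collection in 'mynewdb':

from pymongo import Connection

things = Connection()['mynewdb']['things']
# second argument to find(): include 'keywords' and 'title', exclude the automatically returned '_id'
for thing in things.find({"name": "book"}, {"keywords": 1, "title": 1, "_id": 0}):
    print thing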

Updating documents in collections
  • MongoDB supports atomic updates in document fields as well as more traditional updates for replacing an entire document.
  • Use the update() method to entirely replace the document matching criteria with a new document.
  • If you want to modify only some attributes of a document, you need to use an update modifier such as $set.
  • update() usually takes two parameters:
    • the first selects the documents that will be updated (similar to the WHERE clause in SQL);
    • the second parameter contains the new values for the document attributes.


Example: insert a new document in the blog collection, and update the tag values.
(1) Insert a new document in the blog collection

>>> new_post = { "author": "Monique", 
...       "text": "Sharding in MongoDB",
...       "tags": ["MongoDB"],
...       "date": datetime.datetime.utcnow()};
>>>
>>> db.blog.insert(new_post)
ObjectId('...')
>>> 

(2) list documents in the collection
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 5, 32, 604000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'author': u'Monique', u'tags': [u'MongoDB']}
>>> 

Now, update the post where the author was Monique.
(1) replace the document with an entirely new document
>>> db.blog.update({"author":"Monique"}, { "author": "Monique", "text": "Sharding in MongoDB", "tags": ["MongoDB", "scalability"], "date": datetime.datetime.utcnow()});
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 8, 43, 416000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'author': u'Monique', u'tags': [u'MongoDB', u'scalability']}
>>> 

Note that the previous update replaced the document entirely, even though all you needed to do was add one new tag to its tags field. If you call the update method and pass only the new value for the tags attribute, the result will be incorrect, because the whole document is replaced:
>>> db.blog.update({"author":"Monique"}, { "tags": ["MongoDB", "scalability"]});
>>>
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(...), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'_id': ObjectId('...'), u'tags': [u'MongoDB']}                  --- updated document
>>> 

(2) Another way to update only some fields of a document is to use the $set update modifier.
  • The $set modifier works like the SET clause of an SQL UPDATE statement, with which you specify only the columns to be updated.
>>> db.blog.update({"author":"Monique"}, { "$set": {"tags": ["MongoDB","Scalability"]}});
>>>
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(...), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(...), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'tags': [u'MongoDB', u'Scalability'], u'author': u'Monique'}
>>> 

(3) Since the "tags" field is an array, you can use the $push update modifier more efficiently.
  • $push appends the value to the field if the field is an existing array; otherwise it sets the field to the single-element array [value].
>>> db.blog.update({"author":"Monique"}, { "$push": {"tags":"Python"}});
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('4eb857b3a9e158609c000004'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 5, 32, 604000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('4eb88081a9e158609c000005'), u'tags': [u'MongoDB', u'Scalability', u'Python'], u'author': u'Monique'}
>>> 


Deleting documents from collections
To delete a document from a collection, use the remove method, passing as a parameter a document that either (a) uniquely identifies the document you want to delete or (b) identifies the set of documents you want to delete (a sketch of deleting by the unique "_id" follows the example below).
>>> db.blog.remove({"author":"Monique"})
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('4eb857b3a9e158609c000004'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
>>>
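If you need to delete exactly one document, the safest criterion is its unique "_id". A minimal sketch, assuming the same 'blogdb' database; the inserted document is a throwaway used only for illustration:

from pymongo import Connection

db = Connection()['blogdb']
post_id = db.blog.insert({"author": "Temp", "text": "draft to be removed"})  # illustrative document
db.blog.remove({"_id": post_id})            # deletes only the document with that unique _id
print db.blog.find_one({"_id": post_id})    # None: the document is gone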


Installing MongoDB on Ubuntu


Recently I have been trying MongoDB and other NoSQL databases for a project, so I decided to compile some of my notes into posts here.

MongoDB: Download, Install and Configuration

Installing MongoDB on Ubuntu
To install MongoDB on Ubuntu, you can use the packages made available by 10gen, following the steps below:

(1) add a line to your /etc/apt/sources.list
deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

(2) Add the 10gen GPG key, or apt will disable the repository (apt uses encryption keys to verify if the repository is trusted and disables untrusted ones).
jdoe@quark:/etc/apt$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10
gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com
gpg: key 7F0CEB10: public key "Richard Kreuter " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
jdoe@quark:/etc/apt$ 

(3) To install the package, update the sources and then install:
$ sudo apt-get update
$ sudo apt-get install mongodb-10gen

(4) Create directories for datafiles and database logs.
By default MongoDB tries to store datafiles in /data/db.
If the directory does not exist, the server will fail to start unless you explicitly assign a different, existing location for the datafiles.
For security reasons, make sure the directory is owned by a non-root user (the user that will run mongod).
(a) You can create the default directory: 
$ sudo mkdir -p  /data/db/
$ sudo chown `id -u` /data/db

(b) Or you can store the datafiles somewhere else. If you do, make sure to specify the datafile location with the --dbpath option when starting the MongoDB server.
$ sudo mkdir -p  
$ sudo chown `id -u` 

(5) You can test the installation by starting the mongo shell:
jdoe@quark:~$ mongo
MongoDB shell version: 2.0.1
connecting to: test
> 

The installation creates the mongodb user and installs files according to the default configuration below:
Installed architecture
  • binaries installed on /usr/bin
jdoe@quark:/usr/bin$ ls -l mongo*
-rwx... mongo            -- database shell
-rwx... mongod           -- mongodb daemon. This is the core database process
-rwx... mongodump        -- hot backups; creates a binary dump of the entire database, selected collections or collection objects
-rwx... mongoexport      -- exports a collection to JSON or CSV
-rwx... mongofiles       -- tool for using GridFS, a mechanism for manipulating large files in MongoDB
-rwx... mongoimport      -- imports a JSON/CSV/TSV file into a MongoDB collection
-rwx... mongorestore     -- restores the output of mongodump
-rwx... mongos           -- sharding controller. Provides automatic load balancing and partitioning
-rwx... mongostat        -- shows usage statistics (counts and percentages) on a running mongodb instance 
-rwx... mongotop         -- provides read/write statistics on collections and namespaces in a mongodb instance
  • Configuration file installed on /etc/mongodb.conf
  • database files will be created in: dbpath=/var/lib/mongodb
  • log files will be created in : logpath=/var/log/mongodb/mongodb.log



Starting up and Stopping MongoDB
  • mongod is the MongoDB core database process. It can be started manually to run in the foreground or as a daemon.
  • MongoDB is a database server: it runs in the foreground or background and waits for connections from the user.
  • There are a number of options with which mongod can be initialized.
  • The startup options fall into general, replication, master/slave, replica set and sharding categories. Some of the startup options are:
--port      num   TCP port which mongodb will use
--maxConns  num   max # of simultaneous connections
--logpath   path  log file path
--logappend       append instead of overwrite log file
--fork            fork server process (daemon)
--auth            authenticate users
--dbpath    path  directory for datafiles
--directoryperdb  each database will be stored in its own directory
--shutdown        shuts down the server

(a) start mongodb running in the foreground in a terminal. Data stored in /mongodb/data; mongodb uses the default port 27017.
(You need to create the /mongodb/data directory first.)
jdoe@quark:~$ mkdir -p /mongodb/data
jdoe@quark:~$ mongod --dbpath /mongodb/data  
...
Sun Nov  6 19:05:09 [initandlisten] options: { dbpath: "/mongodb/data" }
Sun Nov  6 19:05:09 [websvr] admin web console waiting for connections on port 28017
Sun Nov  6 19:05:09 [initandlisten] waiting for connections on port 27017
...

jdoe@quark:~$ ps -ef | grep mongo
jdoe   20519 16142  0 19:05 pts/1    00:00:10 mongod --dbpath /mongodb/data
jdoe   20566 20034  0 19:49 pts/2    00:00:00 grep mongo

jdoe@quark:~$ ls -l /mongodb/data
total 4
-rwxr-xr-x 1 jdoe jdoe 6 2011-11-06 19:05 mongod.lock

(b) start mongodb as a daemon, running on TCP port 20012. Data stored in /mongodb/data. Logs in /mongodb/logs.
jdoe@quark:~$ mongod --fork --port 20012 --dbpath /mongodb/data/ --logpath /mongodb/logs/mongodblog --logappend 
forked process: 20655
jdoe@quark:~$ all output going to: /mongodb/logs/mongodblog
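To talk to this daemon from PyMongo, pass the same non-default port when creating the connection; a minimal sketch, assuming the daemon started in (b) is still running on port 20012:

from pymongo import Connection

# connect to the mongod daemon started above with --port 20012
connection = Connection('localhost', 20012)
print connection.database_names()           # lists the databases served by that daemon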

Alternatively, you can start/stop MongoDB through the upstart service installed by the package:
jdoe@quark:~$ sudo start mongodb
mongodb start/running, process 2824

jdoe@quark:~$ sudo stop mongodb
mongodb stop/waiting

Stopping MongoDB

  1. Ctrl-C will do it, if the server is running in the foreground. mongod waits until all ongoing operations complete and then exits.
  2. Alternatively:
(a) call mongod with --shutdown option
jdoe@quark:~$ mongod --dbpath /mongodb/data --shutdown
killing process with pid: 20746
or
(b) use database shell (mongo)
(Here note the confusing output of the db.shutdownServer() call. Although the messages suggest failure, the database is shut down as expected.)
jdoe@quark:~$ mongo quark:20012
MongoDB shell version: 2.0.1
connecting to: quark:20012/test
> use admin
switched to db admin
> db.shutdownServer()
Sun Nov  6 20:23:59 DBClientCursor::init call() failed
Sun Nov  6 20:23:59 query failed : admin.$cmd { shutdown: 1.0 } to: quark:20012
server should be down...

Oracle Flashback Technology (III) - Flashback Data Archive



Oracle Flashback technology:
  • Logical flashback features (do not depend on RMAN; rely on undo data)
  • Physical flashback
  • New in Oracle 11g: Flashback Data Archive (the subject of this post)



Using Flashback Data Archive (Oracle Total Recall)
  • With Flashback Data Archive you can store and track transactional changes to a record over its lifetime.
  • It permanently stores undo information in flashback archives, allowing you to keep the transactional history of an object since its creation.
  • Flashback archives are enabled on individual tables. Each archive is located in a tablespace and has a name, a specified retention period and a space quota on the tablespace.
  • A database can have multiple flashback archives.
  • When a DML transaction commits on a table with flashback archiving enabled, the Flashback Data Archiver (FBDA) process stores the pre-image of the changed rows in the flashback archive.
  • FBDA also manages the data within the flashback archives (purging data beyond the retention period).
  • Historical data can be queried using the Flashback Query AS OF clause.
  • Useful for compliance with record retention policies and audit requirements.


To enable flashback archiving for a table:
  • You need the FLASHBACK ARCHIVE privilege on a flashback data archive
  • The table cannot be clustered, nested, temporary, remote or external
  • The table cannot have LONG or nested columns


Create a Flashback Data Archive
(1) Create a new tablespace (you may also use an existing one)

SQL> create tablespace fda_ts
   datafile '/u01/app/oracle/oradata/test112/fda1_01.dbf'
   size 1m autoextend on next 1m;

SQL> select tablespace_name, status, contents, retention
  from dba_tablespaces
  where tablespace_name ='FDA_TS';

TABLESPACE_NAME                STATUS    CONTENTS  RETENTION   
------------------------------ --------- --------- ----------- 
FDA_TS                         ONLINE    PERMANENT NOT APPLY   

(2) Create flashback archives:

SQL> create flashback archive default fda_1m tablespace fda_ts   -- Must be SYSDBA to create DEFAULT FDA
  quota 1G retention 1 month;                                -- To change use ALTER FLASHBACK ARCHIVE...SET DEFAULT

SQL> create flashback archive fda_2yr tablespace fda_ts retention 2 year;
  
SQL> create flashback archive fda_10d tablespace fda_ts retention 10 day;

Managing Flashback Data Archives:
(1) Manage FDA tablespaces:

ALTER FLASHBACK ARCHIVE...
   ...SET DEFAULT;
   ... ADD TABLESPACE... QUOTA...;
   ... MODIFY TABLESPACE...
   ... REMOVE TABLESPACE...

(2) Manage retention period:

ALTER FLASHBACK ARCHIVE fda_name MODIFY RETENTION n [Year | Month | day ];

(3) Purge historical data

ALTER FLASHBACK ARCHIVE...
   ...PURGE ALL;                          -- Purge ALL historical data
   ...PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL 'n' DAY);
   ...PURGE BEFORE SCN scn_num;

(4) Drop FDA:

DROP FLASHBACK ARCHIVE fda_name;           -- Drops FDA. Keeps tablespace.

Enabling FDAs on objects:
  • FDA is disabled by default
  • A user needs the FLASHBACK ARCHIVE privilege on an archive to enable flashback archiving on an object.
SQL> conn sys/pwd as sysdba;

SQL> grant flashback archive on fda_1m to userA;

SQL> conn userA/pwd;

SQL> Create table emp 
  (empno number primary key,
   ename varchar2(20),
   salary number) 
  flashback archive fda_1m;


-- To Disable Flashback archive on table
SQL> ALTER TABLE emp NO flashback archive;

Information about Flashback Data Archives is available in the views:
DBA_FLASHBACK_ARCHIVE, DBA_FLASHBACK_ARCHIVE_TS and DBA_FLASHBACK_ARCHIVE_TABLES
SQL> select owner_name, flashback_archive_name, retention_in_days, status, 
       to_char(last_purge_time, 'dd-mon-yy hh24:mi:ss')
from dba_flashback_archive;

OWNER_NAME    FLASHBACK_ARCHIVE_NAME  RETENTION_IN_DAYS      STATUS  LAST_PURGE_TIME           
------------- ----------------------- ---------------------- ------- ------------------------- 
SYS           FDA_1M                  30                     DEFAULT 25-oct-11 13:34:14 
SYS           FDA_2YR                 730                            25-oct-11 13:34:54 
SYSTEM        FDA_10D                 10                             25-oct-11 13:38:05

SQL> select * from dba_flashback_archive_ts;

FLASHBACK_ARCHIVE_NAME  FLASHBACK_ARCHIVE#     TABLESPACE_NAME                QUOTA_IN_MB 
----------------------- ---------------------- ------------------------------ ------------
FDA_1M                  1                      FDA_TS                         1024        
FDA_2YR                 2                      FDA_TS                                     
FDA_10D                 3                      FDA_TS                                     


SQL> select * from dba_flashback_archive_tables;

TABLE_NAME  OWNER_NAME                     FLASHBACK_ARCHIVE_NAME ARCHIVE_TABLE_NAME   STATUS   
----------- ------------------------------ ---------------------- ------------------- -------- 
EMP         SYSTEM                         FDA_1M                 SYS_FBA_HIST_75434  ENABLED  


Example: Viewing table history.
(1) Insert data on emp
(2) Keep record of some points in time
(3) Query the historical data on emp
SQL> select to_char(systimestamp, 'dd-mon-yy hh24:mi:ss') start_time, 
       current_scn start_scn
     from v$database;

START_TIME         START_SCN              
------------------ ---------------------- 
25-oct-11 14:22:25 1498655       

SQL> select * from emp;

EMPNO                  ENAME                SALARY                 
---------------------- -------------------- ---------------------- 

-- PL/SQL block performs a number of DMLs on emp and prints timestamps
set serveroutput on
declare
 procedure get_timestamp
 is
   v_time varchar2(25);
   v_scn  integer;
 begin
   select to_char(systimestamp, 'dd-mon-yy hh24:mi:ss') start_time, 
        current_scn start_scn into v_time, v_scn
   from v$database;
   dbms_output.put_line('timestamp: ' || v_time);
   dbms_output.put_line('SCN:       ' || v_scn);
end;
 
begin
  insert into emp values (1, 'John', 2000);
  commit;
  dbms_lock.sleep(2);
  get_timestamp();
  for i in 1 .. 10 
  loop
   update emp set salary =salary*1.05 where empno=1;
   commit;
   dbms_lock.sleep(2);
   if i=5 then
     insert into emp values (2, 'Mary', 3000);
     update emp set salary = 2500 where empno =1;
     commit;
     dbms_lock.sleep(2);
     update emp set ename = initcap(ename);
     commit;
     insert into emp values (3, 'Gary', 1500);
     delete from emp where empno=2;
     commit;
     get_timestamp();   
   end if;
  end loop;
  dbms_lock.sleep(2);
  get_timestamp();
end;
/

anonymous block completed
timestamp: 25-oct-11 14:22:27
SCN:       1498659
timestamp: 25-oct-11 14:22:39
SCN:       1498683
timestamp: 25-oct-11 14:22:51
SCN:       1498700

SQL> select * from emp;

EMPNO                  ENAME                SALARY                 
---------------------- -------------------- ---------------------- 
1                      John                 3190.70390625          
3                      Gary                 1500         

SQL> select to_char(systimestamp, 'dd-mon-yy hh24:mi:ss') end_time, 
       current_scn end_scn
     from v$database;
END_TIME           END_SCN                
------------------ ---------------------- 
25-oct-11 14:22:51 1498701


(a) Select data at a point in time
SQL> select *  from emp as of scn 1498683;

EMPNO                  ENAME                SALARY                 
---------------------- -------------------- ---------------------- 
1                      John                 2500                   
3                      Gary                 1500       

SQL> select * 
     from emp as of timestamp to_timestamp('25-oct-11 14:22:51', 'dd-mon-yy hh24:mi:ss');
EMPNO                  ENAME                SALARY                 
---------------------- -------------------- ---------------------- 
1                      John                 3190.70390625          
3                      Gary                 1500  


(b) select all versions of a row between two points in time

SQL> select *
     from emp
       versions between scn 1498659 and 1498700
     where empno =1;
EMPNO                  ENAME                SALARY                 
---------------------- -------------------- ---------------------- 
1                      John                 3190.70390625          
1                      John                 2000                   
1                      John                 2100                   
1                      John                 2205                   
1                      John                 2315.25                
1                      John                 2431.0125              
1                      John                 2552.563125            
1                      John                 2500                   
1                      John                 2500                   
1                      John                 2625                   
1                      John                 2756.25                
1                      John                 2894.0625              
1                      John                 3038.765625            
1                      John                 3190.70390625          

 14 rows selected 


SQL> select versions_xid xid, versions_startscn start_scn,
            versions_endscn end_scn, versions_operation operation,
            empno, ename, salary
     from emp
        versions between scn 1498659 and 1498700
     where empno =1;

XID              START_SCN   END_SCN                OPERATION EMPNO   ENAME                SALARY                 
---------------- ----------- ---------------------- --------- ------- -------------------- ---------------------- 
03000F008B040000 1498633     1498674                I         1       John                 3190.70390625          
05001F00AA040000 1498657     1498661                I         1       John                 2000                   
030003008B040000 1498661     1498664                U         1       John                 2100                   
02000A007E040000 1498664     1498667                U         1       John                 2205                   
01000D003A030000 1498667     1498670                U         1       John                 2315.25                
0400090075030000 1498670     1498672                U         1       John                 2431.0125              
06000B0094040000 1498672     1498674                U         1       John                 2552.563125            
0900080096040000 1498674     1498678                U         1       John                 2500                   
03001F008B040000 1498678     1498685                U         1       John                 2500                   
09001F0097040000 1498685     1498688                U         1       John                 2625                   
080010006C050000 1498688     1498691                U         1       John                 2756.25                
0700190078030000 1498691     1498694                U         1       John                 2894.0625              
03001A008B040000 1498694     1498697                U         1       John                 3038.765625            
05001E00AB040000 1498697                            U         1       John                 3190.70390625          

 14 rows selected