Fusion Middleware Upgrading Oracle Business Intelligence


Hasan ÖZ (Computer Engineering)

Business Intelligence, Data Science, Database Scripting languages, Data Warehouse, Data Integration-ETL, Data Modeling, Dashboards, Big Data Analysis, Designing Cubes and Dimensions






Performing a 12c Upgrade with a New Install


Quick Reference for OBI 12c Upgrade from 12.2.1 to


Oracle Business Intelligence Out of Place Upgrade – 11g to 12c




Tips for Upgrading OBIEE 12C – Oracle Business Intelligence Guide


Upgrade OBIEE 11g to OBIEE 12c


Upgrading To OBIEE 12c – Key Things You Need To Know


OBIEE 11g Migration to OBIEE 12c on Oracle Linux Server release 6.7


How to fix ID error in reports after upgrade from OBIEE 11g to 12c


Issues identified after upgrading from OBIEE 11g to OBIEE 12c


Art of BI: OBIEE 12c Upgrade Gone Wrong? Bring in Datavail


OBIEE 12c Baseline Validation Tool


Upgrade OBIEE 11g to 12C



OBIEE 12c post-upgrade challenges – unable to save HTML-based dashboard reports


OBIEE 12c and Oracle BI Applications (OBIA) support




Posted in OBIEE

Unable to start Cognos Configuration because the configuration data is locked by another instance




When you start IBM Cognos Configuration, you encounter the message below: “The configuration data is locked by another instance…”

cogstartup lock

The solution is as follows:


Technote (troubleshooting)


When running cogconfig.sh -s, you may occasionally get an error saying that the configuration has been locked.


Unable to start Cognos Configuration because the configuration data is locked by another instance of Cognos Configuration



Cognos Configuration is either already in use, or was last shut down with errors.


UNIX (all platforms)

Resolving the problem

When Cognos Configuration is started, a cogstartup.lock file is created. If this file is present, the lock is enabled. Delete this file to release the lock.

From the Cognos Configuration User Guide:

“You may get an error message that the configuration data is locked by another instance of Cognos Configuration. When you start Cognos Configuration, it checks to see if the cogstartup.lock file exists in c10_location/configuration. The file may exist if a previous instance did not shut down properly or if another instance of Cognos Configuration is running. If another instance of Cognos Configuration is running, you should exit that instance. Otherwise, any changes you make to the local configuration may result in errors. If no other instance of Cognos Configuration is running, delete the cogstartup.lock file in c10_location/configuration. If the Cognos component service is stopped, click Start.”
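Deleting the stale lock file can be scripted. Below is a minimal Python sketch; the `c10_location` argument and the helper name are hypothetical, and you should first confirm that no other instance of Cognos Configuration is running before removing the lock:

```python
import os

def release_cogstartup_lock(c10_location):
    """Delete a stale cogstartup.lock so Cognos Configuration can start.

    Only call this after confirming no other instance of Cognos
    Configuration is running; otherwise concurrent configuration
    edits may cause errors.
    """
    lock_file = os.path.join(c10_location, "configuration", "cogstartup.lock")
    if os.path.exists(lock_file):
        os.remove(lock_file)   # stale lock removed; Configuration can start
        return True
    return False               # no lock present; nothing to do
```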

Posted in IBM Cognos Administration, Uncategorized

Apache Spark Core, Resilient Distributed Dataset RDD

Spark also makes it possible to write code more quickly, as you have over 80 high-level operators at your disposal. To demonstrate this, let’s have a look at the “Hello World!” of Big Data: the Word Count example. Written in Java for MapReduce, it has around 50 lines of code, whereas in Spark (and Scala) you can do it as simply as this:

sparkContext.textFile("hdfs://...")
            .flatMap(line => line.split(" "))
            .map(word => (word, 1)).reduceByKey(_ + _)
            .saveAsTextFile("hdfs://...")

Another important aspect when learning how to use Apache Spark is the interactive shell (REPL) which it provides out of the box. Using the REPL, one can test the outcome of each line of code without first needing to code and execute the entire job. The path to working code is thus much shorter, and ad-hoc data analysis is made possible.

Additional key features of Spark include:

  • Currently provides APIs in Scala, Java, and Python, with support for other languages (such as R) on the way
  • Integrates well with the Hadoop ecosystem and data sources (HDFS, Amazon S3, Hive, HBase, Cassandra, etc.)
  • Can run on clusters managed by Hadoop YARN or Apache Mesos, and can also run standalone

The Spark core is complemented by a set of powerful, higher-level libraries which can be seamlessly used in the same application. These libraries currently include SparkSQL, Spark Streaming, MLlib (for machine learning), and GraphX, each of which is further detailed in this article. Additional Spark libraries and extensions are currently under development as well.

Spark Core

Spark Core is the base engine for large-scale parallel and distributed data processing. It is responsible for:

  • memory management and fault recovery
  • scheduling, distributing and monitoring jobs on a cluster
  • interacting with storage systems

RDD Resilient Distributed Dataset

Spark introduces the concept of an RDD (Resilient Distributed Dataset), an immutable, fault-tolerant, distributed collection of objects that can be operated on in parallel. An RDD can contain any type of object and is created by loading an external dataset or distributing a collection from the driver program.

RDDs support two types of operations:

  • Transformations are operations (such as map, filter, join, union, and so on) that are performed on an RDD and which yield a new RDD containing the result.
  • Actions are operations (such as reduce, count, first, and so on) that return a value after running a computation on an RDD.

Transformations in Spark are “lazy”, meaning that they do not compute their results right away. Instead, they just “remember” the operation to be performed and the dataset (e.g., file) on which the operation is to be performed. The transformations are only actually computed when an action is called, and the result is returned to the driver program. This design enables Spark to run more efficiently. For example, if a big file were transformed in various ways and passed to a first action, Spark would only process and return the result for the first line, rather than do the work for the entire file.

By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the persist or cache method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it.
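Spark’s lazy-evaluation model can be imitated in plain Python with generators. This is only an analogy, not Spark code, but it shows the same idea: the transformation lines build a recipe, and nothing is computed until an “action” consumes the result.

```python
# A generator pipeline: like Spark transformations, these lines only
# describe the computation; no data is touched yet.
lines = ["to be or not to be"]
words = (w for line in lines for w in line.split())  # ~ flatMap
pairs = ((w, 1) for w in words)                      # ~ map

# Only an "action" forces evaluation. Taking the first element processes
# just enough input to produce it, much like Spark's first().
first_pair = next(pairs)
print(first_pair)  # ('to', 1)
```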

Source: https://www.toptal.com/spark/introduction-to-apache-spark

Posted in Apache Spark, Uncategorized

Apache Spark

Spark is a general-purpose data processing engine. The APIs and tools it provides allow data scientists and application developers to integrate Spark into their own applications, making it possible to query, analyze, and transform data quickly and at scale. Spark’s flexibility makes it a suitable tool for nearly every problem encountered in its use cases, and allows it to process petabytes of data in a distributed fashion across a cluster of physical or virtual servers.


Posted in Apache Spark


Source site: İsmail ADAR

Finding How Many Times Stored Procedures Have Been Executed

SELECT DB_NAME(st.dbid) DatabaseName,
OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid) StoredProcedure,
MAX(cp.usecounts) ExecutionCount
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL AND cp.objtype = 'proc'
GROUP BY DB_NAME(st.dbid),
OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid)
ORDER BY MAX(cp.usecounts) DESC

Using the T-SQL PIVOT Command

SELECT [AZ], [CA], [TX] -- outer SELECT and COUNT aggregate reconstructed; the original snippet was truncated
FROM
(SELECT sp.StateProvinceCode
FROM Person.Address a
INNER JOIN Person.StateProvince sp
ON a.StateProvinceID = sp.StateProvinceID
) k
PIVOT
(COUNT(StateProvinceCode)
FOR StateProvinceCode IN([AZ],[CA],[TX])
) AS pvt;

Multiplying Grouped Values with T-SQL

CREATE TABLE #Sonuc (OyunAd NVARCHAR(20), Oran INT) -- temp table definition reconstructed to make the snippet runnable

INSERT INTO #Sonuc VALUES('Futbol',3)
INSERT INTO #Sonuc VALUES('Futbol',6)
INSERT INTO #Sonuc VALUES('Futbol',2)
INSERT INTO #Sonuc VALUES('Basketbol',7)
INSERT INTO #Sonuc VALUES('Basketbol',4)
INSERT INTO #Sonuc VALUES('Basketbol',5)
INSERT INTO #Sonuc VALUES('Voleybol',8)
INSERT INTO #Sonuc VALUES('Voleybol',2)

SELECT OyunAd, SUM(Oran) as Toplam, EXP(SUM(LOG(Oran))) as Carpim
FROM #Sonuc
GROUP BY OyunAd

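The EXP(SUM(LOG(Oran))) expression works because the logarithm turns a product into a sum. The identity can be checked outside SQL Server; here is a small Python sketch using the Futbol values above (note the trick requires strictly positive values, since LOG fails for zero or negatives):

```python
import math
from functools import reduce

oran = [3, 6, 2]  # the Futbol rows above

# Direct product of the group
product = reduce(lambda a, b: a * b, oran)

# Same product via exp(sum(log(x))), as in the T-SQL query
via_logs = math.exp(sum(math.log(x) for x in oran))

print(product, round(via_logs))  # 36 36
```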

Posted in Uncategorized

FORMAT (Transact-SQL)



FORMAT ( value, format [, culture ] )

Returns a value formatted with the specified format and optional culture in SQL Server 2017. Use the FORMAT function for locale-aware formatting of date/time and number values as strings. For general data type conversions, use CAST or CONVERT.


value — Expression of a supported data type to format. For a list of valid types, see the table in the following Remarks section.

format — nvarchar format pattern.

The format argument must contain a valid .NET Framework format string, either as a standard format string (for example, “C” or “D”), or as a pattern of custom characters for dates and numeric values (for example, “MMMM DD, yyyy (dddd)”). Composite formatting is not supported. For a full explanation of these formatting patterns, consult the .NET Framework documentation on string formatting in general, custom date and time formats, and custom number formats. A good starting point is the topic, “Formatting Types.”

culture — Optional nvarchar argument specifying a culture.

If the culture argument is not provided, the language of the current session is used. This language is set either implicitly, or explicitly by using the SET LANGUAGE statement. culture accepts any culture supported by the .NET Framework as an argument; it is not limited to the languages explicitly supported by SQL Server. If the culture argument is not valid, FORMAT raises an error.

Return Types

nvarchar or null

The length of the return value is determined by the format.


FORMAT returns NULL for errors other than a culture that is not valid. For example, NULL is returned if the value specified in format is not valid.

The FORMAT function is non-deterministic.

FORMAT relies on the presence of the .NET Framework Common Language Runtime (CLR).

This function cannot be remoted, since it depends on the presence of the CLR. Remoting a function that requires the CLR could cause an error on the remote server.

FORMAT relies upon CLR formatting rules, which dictate that colons and periods must be escaped. Therefore, when the format string (second parameter) contains a colon or period, the colon or period must be escaped with a backslash when the input value (first parameter) is of the time data type. See D. FORMAT with time data types.

DECLARE @d DATETIME = '08/23/2017';

SELECT FORMAT ( @d, 'd', 'tr-tr' ) AS 'TR Turkey Result'
,FORMAT ( @d, 'd', 'en-US' ) AS 'US English Result'
,FORMAT ( @d, 'd', 'en-gb' ) AS 'Great Britain English Result'
,FORMAT ( @d, 'd', 'de-de' ) AS 'German Result'
,FORMAT ( @d, 'd', 'zh-cn' ) AS 'Simplified Chinese (PRC) Result';

TR Turkey Result: 23.08.2017

SELECT FORMAT ( @d, 'D', 'tr-tr' ) AS 'TR Turkey Result'
,FORMAT ( @d, 'D', 'en-US' ) AS 'US English Result'
,FORMAT ( @d, 'D', 'en-gb' ) AS 'Great Britain English Result'
,FORMAT ( @d, 'D', 'de-de' ) AS 'German Result'
,FORMAT ( @d, 'D', 'zh-cn' ) AS 'Chinese (Simplified PRC) Result';


DECLARE @Date DATETIME = '08/23/2017'; -- declaration assumed from the DateKey result below

SELECT 'dd/MM/yyyy' = FORMAT(@Date,'dd/MM/yyyy')
, 'DateKey' = FORMAT(@Date,'yyyyMMdd')

DateKey Result: 20170823
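For comparison, the same custom patterns map closely to the strftime directives familiar from other environments. Here is a small Python sketch reproducing the two results above:

```python
from datetime import date

d = date(2017, 8, 23)

# %d/%m/%Y corresponds to the custom pattern 'dd/MM/yyyy'
ddmmyyyy = d.strftime("%d/%m/%Y")
# %Y%m%d corresponds to the 'yyyyMMdd' DateKey pattern
date_key = d.strftime("%Y%m%d")

print(ddmmyyyy, date_key)  # 23/08/2017 20170823
```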


Posted in MSSQL

Display custom message in No Results view in OBIEE

Main web address of this article :  http://www.bifuture.com/display-custom-message-results-view-obiee/

How to change or remove the No Results message in OBIEE?

When the results of an analysis return no data, the following default message is displayed to users:


We can modify the No Results view by adding a custom message, more explanation on the use of the report, or hints on how to filter values. We can also change its visual formatting.

To do that, go to the Results tab and click Edit Analysis Properties:


In Analysis Properties, change the No Results Settings to Display Custom Message:


In the No Results view we can add custom HTML and CSS styles, so even very complex pages with links, images, etc. can be created. We will change the default message by adding a report header, changing the custom message’s text formatting, and hiding the Refresh button. We will use the p selector to change the paragraph text to grey, set the header (ErrorTitle class) font size and color, and hide the Edit / Refresh links (ResultLinksCell class) using the display: none property.
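A sketch of the CSS described above; the ErrorTitle and ResultLinksCell class names come from OBIEE’s generated markup, while the specific color and size values are illustrative:

```css
/* Paragraph text of the custom message in grey */
p { color: #808080; }

/* Header of the No Results view */
.ErrorTitle { font-size: 16px; color: #333333; }

/* Hide the Edit / Refresh links */
.ResultLinksCell { display: none; }
```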

Here’s our custom No Results Settings with custom styles and message:


If you choose Display Custom Message but leave the Header and Message fields blank, the No Results view will display nothing but the Edit / Refresh links (which you can also remove using the CSS display: none property).

This is how the No Results view will look after our changes:


Because by default there is no button to go back from the No Results view, you can add a custom HTML button that returns the user to the report.


Posted in OBIEE