BlueZ Android external Bluetooth devices

I am working on a Bluetooth application using BlueZ that is integrated with external Bluetooth devices, and I need to perform all of the bluetoothctl activity on my new framework. I am new to this environment. Is it possible to call the entire bluetoothctl from my framework, or do I have to call some other files from BlueZ?

How do I add BlueZ and D-Bus to my makefile?



Solution 1:[1]

I'd narrow down the MYLOGGING table first, putting it into its own subquery filtered by date, so the join has less data to search through. Since the FROM clause is evaluated before the WHERE clause, restricting the rows there cuts down the work early.

select distinct i.TERMINALNAME
    , to_char(i.BEGINTIME, 'mm/dd/yyyy hh:mi:ss AM') BEGINTIME
    , i.ERRORTEXT
    , i.RECORDSPROCESSED
    , i.RECORDTYPE
from (select i2.*
      from MYLOGGING i2
      where i2.BEGINTIME >= trunc(sysdate) - 60
     ) i
inner join (select recordtype
            , max(BEGINTIME) as lastrundate
            from MYLOGGING
            group by recordtype
           ) im on im.recordtype = i.recordtype
               and im.lastrundate = i.BEGINTIME
where i.ERRORTEXT in ('Success', 'Failure')
  and i.TERMINALNAME not in ('REE300', 'XEE300', 'YT', 'QX', 'VC', 'DF')
order by i.TERMINALNAME asc;

Solution 2:[2]

You can probably avoid the performance problem by converting the self-join into an analytic function. Without the join, there's less work to do and fewer ways for the optimizer to choose a bad plan.

select distinct
    TERMINALNAME, 
    to_char(BEGINTIME,'mm/dd/yyyy hh:mi:ss AM') BEGINTIME, 
    ERRORTEXT, 
    RECORDSPROCESSED, 
    RECORDTYPE
from
(
    select MYLOGGING.*, max(BEGINTIME) over (partition by recordtype) MAX_BEGINTIME_PER_RECORDTYPE
    from MYLOGGING
)
where BEGINTIME = MAX_BEGINTIME_PER_RECORDTYPE
  and ERRORTEXT in ('Success', 'Failure') 
  and TERMINALNAME Not In ('REE300', 'XEE300', 'YT', 'QX', 'VC', 'DF') 
  and BEGINTIME>= trunc(sysdate) - 60
ORDER BY TERMINALNAME ASC;

Finding out why the original query ran slowly can be more difficult. You'll want to capture the actual execution plan with actual numbers, instead of relying only on the explain plan estimates. The /*+ GATHER_PLAN_STATISTICS */ hint and DBMS_XPLAN can be used together to display the real execution statistics.
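A rough sketch of that workflow follows; the hint and the ALLSTATS LAST format are standard DBMS_XPLAN usage, but the trimmed-down query is only a placeholder for your real statement:

-- Run the statement with the hint so Oracle records actual row counts and timings.
select /*+ GATHER_PLAN_STATISTICS */ count(*)
from MYLOGGING
where BEGINTIME >= trunc(sysdate) - 60;

-- Then, in the same session, display the plan of the last statement executed,
-- including estimated (E-Rows) versus actual (A-Rows) row counts.
select *
from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));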

The full execution plan, especially the data in the "note" section, may give you a clue as to why the plan was slow and why it got faster. Perhaps there was a dynamic reoptimization, where Oracle recognized the first run was bad, and adjusted the plan. You may also want to try using DBMS_XPLAN.DISPLAY_AWR to see a history of execution plans.
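A minimal sketch of that, assuming the statement can be found by its text in V$SQL (the SQL_ID below is a placeholder, and reading AWR data generally requires the Diagnostics Pack):

-- Look up the SQL_ID of the statement.
select sql_id, sql_text
from v$sql
where sql_text like '%MYLOGGING%';

-- List every execution plan captured in AWR for that SQL_ID
-- (replace 'abcd1234efgh5' with the SQL_ID found above).
select *
from table(dbms_xplan.display_awr('abcd1234efgh5'));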

The actual time, and the actual number of rows compared with the estimated number of rows, can often tell you how and why Oracle made a bad decision. For example, the extra WHERE clause may have caused Oracle to significantly under-estimate the number of rows returned, which made an index access with a nested loops join look faster than it really was. As a rule of thumb, nested loops and index scans work best when returning a small percentage of rows, whereas hash joins and full table scans work best when returning a large percentage of rows.
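One way to check for that kind of misestimate is to compare the optimizer's estimate for the extra predicates against the real row count; a sketch using the filters from the query above:

-- Ask the optimizer for its estimate of the filtered row count.
explain plan for
select *
from MYLOGGING
where ERRORTEXT in ('Success', 'Failure')
  and TERMINALNAME not in ('REE300', 'XEE300', 'YT', 'QX', 'VC', 'DF');

-- The Rows column in this output is the estimate...
select * from table(dbms_xplan.display);

-- ...which can be compared against the actual count.
select count(*)
from MYLOGGING
where ERRORTEXT in ('Success', 'Failure')
  and TERMINALNAME not in ('REE300', 'XEE300', 'YT', 'QX', 'VC', 'DF');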

Gathering and interpreting this data can take hours, especially if you're not familiar with the process. You could take this opportunity to learn more about query tuning, or if you're out of time, just use the above analytic function approach and avoid the problem entirely.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1: Dougietin1
[2] Solution 2: Jon Heller