Question about the new EPG database

  • The epg.db uses only UTC time, as I understand it.


    I would have to check the plugin code for any time adjustment that needs to be removed or adapted, but as I already said, this puts me in a coding area I didn't want to be in, as it is not my plugin.


    I still hope that the providers of the EPG sources simply use my epgdb.py for "inspiration purposes" and provide supported solutions themselves. As I already said, it is NOT my focus to implement a full-blown EPG plugin for OE2.2 - I just wanted to provide the missing link, and that is more or less working now and also allows DMM to verify and check the enigma2 part.


    Happy New Year also from my side!


    gutemine

  • If you are using this from xmltvconverter.py then it should work well:


    import calendar

    def get_time_utc(timestring, fdateparse):
        # print "get_time_utc", timestring, format
        try:
            values = timestring.split(' ')
            tm = fdateparse(values[0])
            timegm = calendar.timegm(tm)
            # suppose the file says +0300 => we have to subtract 3 hours from local time to get GMT
            timegm -= (3600 * int(values[1]) / 100)
            return timegm
        except Exception, e:
            print "[XMLTVConverter] get_time_utc error:", e
            return 0


    UTC and GMT are almost the same, but UTC ignores summer time, so it is a good constant reference.
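As a quick illustration of why calendar.timegm() is the right tool in the code above (a minimal sketch, not plugin code): it interprets a struct_time as UTC, ignoring the box's local timezone and DST, unlike time.mktime().

```python
import calendar
import time

# calendar.timegm() treats the struct_time as UTC, so it is the exact
# inverse of time.gmtime() no matter what timezone the box is set to;
# time.mktime() would shift the result by the local UTC offset instead.
t = 1419980400  # 2014-12-30 23:00:00 UTC
roundtrip = calendar.timegm(time.gmtime(t))
assert roundtrip == t
```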


    Update: I am now back at my DM8000, and the programme starts at 23:00, so I had it wrong.

    DM.One AIO, DM920, DM7080, archived: DM8000 from December 2008 and a DM600.


  • Thanks


    The tip with the settings was helpful to some extent - now I see EPG for a lot more channels, but it is still 28.2, so it didn't depend on satellites.xml.


    Anyway, I think for the moment the latest plugin version is more or less the maximum I can do, and 1.5 days of effort is also more than I wanted to spend on this whole journey.


    Now the users will also have to put in their time to find out if it works.


    The only remaining thing where I would need your help/input is my Q&D (quick and dirty) language mapping routine, which might not be perfect. Users could check whether the languages are all properly mapped or whether some are missing, etc.


    Just zap to a channel that broadcasts your wanted EPG language; Now/Next should then create an EPG entry in the database, and you can check in the T_Data table what language entry it got. Then try with xmltv input and compare - if you find problems, or missing mappings where only the 2-letter language code is put in, report them so that I can fix it.
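For illustration only, such a Q&D mapping could look like the sketch below. The table is hypothetical, not the plugin's actual routine: it maps xmltv 2-letter codes to the 3-letter codes used on the DVB side, and falls back to the raw 2-letter code, which is exactly the case users are asked to report.

```python
# Hypothetical sketch of a quick-and-dirty language mapping; the table is
# illustrative, NOT the plugin's real one (ISO 639-2/B codes as used by DVB).
XMLTV_LANG_MAP = {
    "de": "ger", "en": "eng", "fr": "fre",
    "nl": "dut", "it": "ita", "es": "spa",
}

def map_language(code):
    # unknown codes fall through unchanged -- the cases users should report
    return XMLTV_LANG_MAP.get(code.lower(), code)
```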


  • One thing still wouldn't let me rest - in EPGImport.py I had disabled loading in its own background thread, because that made debugging easier and because sqlite is not 100% thread-safe when several threads run on it in parallel, as Reichi warned us.


    Nevertheless, as a test I turned loading in its own task back on (so epgdb.py runs as a thread), because that way the enigma2 main thread is not burdened as much.


    The box is noticeably more responsive while the load is running, and the spinning gears hardly appear anymore during loading - at most at the beginning, when the epg.db is written from memory to flash, or when all the old events have to be deleted. In addition, the code deviates less from the original plugin this way.


    And no more lock errors occur either, because the standard enigma2 thread is now simply one whose further access is ignored.


    So please test this r22 to see whether we like loading as a task better, whether it runs just as stably, and whether no more DB lock errors occur.
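The thread-safety point can be sketched roughly as follows (illustrative Python, not the plugin code; the T_Event table name is made up): each thread gets its OWN sqlite connection, since sqlite3 connections must not be shared across threads, plus a busy timeout so concurrent access waits instead of failing with "database is locked".

```python
import sqlite3
import threading

def import_in_background(db_path, rows):
    # Sketch only: T_Event and its columns are hypothetical, not the
    # actual epg.db schema.
    def worker():
        # the worker opens its own connection; timeout=30 makes writes
        # wait up to 30 s for a lock instead of raising immediately
        con = sqlite3.connect(db_path, timeout=30)
        with con:  # one transaction for the whole batch
            con.executemany("INSERT INTO T_Event(id, title) VALUES (?, ?)", rows)
        con.close()
    t = threading.Thread(target=worker)
    t.start()
    return t  # caller may join() or let it run beside the main thread
```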


    Best regards,
    gutemine


  • Will do.


    I had broken EPG when I imported a second list, or several lists at the same time. I had the same behaviour as with the UK channels, so I tried commenting out the start-up deletes in epgdb.py, and that works fine.


    For the UK EPG in the web interface I now have times, but the day of the date is still zero; that is a bug in the web interface, though. I will see whether everything is still good tomorrow. I leave it up to enigma2 itself to clean up outdated events.


    PS: I did not see your last posting, so I will run r22 and see if I can do some Python coding. :winking_face:


  • Thanks for trying and testing, but only multiple runs with different sources should wipe out the data of the previous one (it was the same with epg.dat, if I remember right).


    If you run multiple sources in a single import, it should work without commenting out the deletes.


    Please use r23 for further testing - adding support for the epg.db location settings parameter was only two lines of code, so I could not resist.


    There is no need to keep your test databases in flash anymore :smiling_face_with_sunglasses:


  • Morning gutemine! I have downloaded the latest version (23) and it's going great so far. I would like to put the database on my USB memory. Could you explain how to do this? I have been looking into this but can't find it.

  • config.misc.epgcache_filename is the settings parameter you need.
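For illustration, one way to change that parameter is to rewrite the line in the settings file - the helper below is hypothetical and not part of the plugin, and it assumes enigma2 is stopped first, since enigma2 overwrites the settings file with its in-memory copy on exit.

```python
# Hypothetical helper: point enigma2 at a different epg.db location by
# rewriting the settings file. Only run while enigma2 is stopped.
SETTINGS_FILE = "/etc/enigma2/settings"  # standard location on the box
KEY = "config.misc.epgcache_filename"

def set_epg_path(new_path, settings_file=SETTINGS_FILE):
    with open(settings_file) as f:
        # drop any existing value for the key, keep everything else
        lines = [line for line in f if not line.startswith(KEY + "=")]
    lines.append("%s=%s\n" % (KEY, new_path))
    with open(settings_file, "w") as f:
        f.writelines(lines)
```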

  • But the plugin doesn't offer any functionality to move the epg.db around, as there is plenty of space in flash.


    r23 merely handles the epg.db at that location if the parameter was changed.

  • OK, thanks for explaining, gutemine. I just try to keep my flash usage as low as possible :smiling_face:


    Have a great 31st of December, and again, I think you are doing a great thing!


    Jootje

  • I don't worry about flash usage - there is a reason why dBackup has a setting to ignore the epg.db when backing up :grinning_squinting_face:

  • I have good and bad news. The good news is that all events are still there and none are mangled or broken after reboots. EVEN the web interface now shows complete dates, i.e. 31.12.2014 instead of 00.12.2014. :smiling_face:


    The bad news is that the EPG clean-up does nothing, or so little that it does not change the file size of the epg.db (113,459 KB):


    Dec 30 13:26:44 dm7080 enigma2[179]: [EPGC] cleanupOutdated
    Dec 30 13:27:44 dm7080 enigma2[179]: [EPGC] cleanupOutdated
    Dec 30 13:28:44 dm7080 enigma2[179]: [EPGC] cleanupOutdated
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGRefresh] Stopping Timer
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGImport] autostart (1) occured at 1419942565.07
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGImport] Stop
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGC] remove channel 0x2f86310
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGC] db thread stopped
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGC] data thread finished
    Dec 30 13:29:25 dm7080 enigma2[179]: [EPGC] Saving database from memory


    I had this before - is the EPGC authorised to clean up events generated by EPGImport (external)?


    By the way, with r22 I had a lot of file locks when starting the import, whereas before I had only one in more than thirty runs. It seems to run faster; in the meantime I have also reinstated the commented-out line in Navigation.py. Going to try r23 now.


    Update: running r23, I also had a few starting problems due to a locked database, but I finally got past it. I did not comment anything out, so r23 was unmodified. The clean-up took ages because my epg.db is over 100 MB, but it finally completed.
    Sadly the size of the epg.db is still 113,459 KB, and even worse, I lost all the events because I only imported one XMLTV source instead of the four I normally do. The good news is that I backed it up first, so I can try again without losing all the good information.



  • Technically, sqlite offers a VACUUM statement that would recover the unused space after a delete, but since extending the DB also takes time, and the next day's load will need more or less the same space in the epg.db again, I don't think it would really make sense to shrink the DB and immediately blow it up again.


    I'm not sure whether DMM implemented the cleanup of EPG data for foreign sources, as Ghost and Reichi said they changed enigma2 to completely ignore such events and not load the ones from DVB either (well, except now/next, I think, if I read the current behaviour correctly).


    This is another reason why deleting all T* tables is more or less needed on any new load - I don't want to create my own housekeeping thread in the plugin as well. As most people will run the load daily at night, this inherent deletion of ALL events before loading the new ones should be sufficient, I think.


    And yes, multithreading also makes the whole thing slightly faster, because it runs in an extra thread instead of using enigma2's main thread, which has other tasks to do and things to handle as well.


    But it could be that your approach of always starting with an empty epg.db now plays against you - enigma2 starts loading too because the epg.db is empty. When the data coming via satellite is already loaded, the xmltv thread should not get that many interruptions.


    I always test this while tuned to Fashion TV HD; as this channel doesn't broadcast any EPG and is not included in the xmltv sources either, it is a perfect candidate that won't interfere in any case.


    But as I already said, DMM will probably need to enhance the enigma2 code slightly further to really make the whole thing run smoothly - at the moment they only added what was urgently needed to make it work at all.


    But let's see what the other users find. For the moment I don't plan any enhancements or fixes, and if it works for a few days, I will probably make a 2.1 version out of r23 as a kind of final use-as-is version.


    PS: the only difference between r22 and r23 is the support for the epg.db settings parameter; apart from that, the versions are identical and should behave identically.

  • I saw your edit - if you want, you could try setting auto_vacuum=FULL as an option when connecting to the database; then it should auto-shrink on the deletes. But you would need to verify whether, and by how much, this further slows down the deletes.


    Or you add something like this after every delete statement:


    cursor.execute("vacuum my_table")
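For reference, a minimal sketch of the auto_vacuum route (in-memory database for illustration). Two caveats worth knowing: SQLite traditionally ignores a table-name argument to VACUUM and always vacuums the whole database, which matches the observed behaviour in this thread; and auto_vacuum only takes effect on a freshly created database, or after a full VACUUM rewrite - which on a 100 MB epg.db takes a while itself.

```python
import sqlite3

# auto_vacuum must be enabled before the first table is created, or be
# followed by a full VACUUM so the file is rebuilt in the new format
con = sqlite3.connect(":memory:")
con.execute("PRAGMA auto_vacuum = FULL")
con.execute("VACUUM")  # rebuild so the pragma actually applies
```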

  • About cleaning... it seems to me the best way would be for DMM to do it when reading from the epg.db into memory, ignoring all events older than now minus "keep outdated EPG" (that setting is in Menu - Setup - System - Customize).
    Secondly, when erasing events, check whether more than one event shares the same pointers to sub-records (title, description, etc.), so that these sub-records are only erased when the referencing event is unique... an event can become unique once all the past events pointing at the same sub-records have been erased earlier and this is the last one.


    The next time memory is written out to the epg.db, it no longer contains obsolete events, and the new events have been added.
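The proposed cleanup could be sketched in SQL roughly like this - table and column names (T_Event, T_Title, stop_time, title_id) are hypothetical, not the actual epg.db schema: outdated events go first, then any title row that no surviving event still points at.

```python
import sqlite3

def cleanup_outdated(con, now):
    # Sketch of a reference-counted cleanup; schema names are illustrative.
    with con:  # one transaction: both deletes commit together
        con.execute("DELETE FROM T_Event WHERE stop_time < ?", (now,))
        # a title is only removed once no remaining event references it
        con.execute(
            "DELETE FROM T_Title "
            "WHERE id NOT IN (SELECT title_id FROM T_Event)"
        )
```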


    I always take a long time to type, so I will now restore the epg.db and add the cursor.execute("vacuum my_table").


  • Hahahaha, with this command, cursor.execute('vacuum my_table'), everything is vacuumed and I am left with a file of 1,585 KB after importing one small XMLTV source into the BIG epg.db.


    Next is to do it with all four XMLTV imports, and then it should work better.


    Update 1:
    When the database locks at startup I get this message:


    Exception AttributeError: "epgdataclass instance has no attribute 'epg'" in <bound method epgdataclass.__del__ of <Plugins.Extensions.EPGImport.epgdata_importer.epgdataclass instance at 0x2d2d490>> ignored


    Update 2:


    I ran with three XMLTV imports, because the UK EPG is only half functional, and the file size is 65,270 KB, so the vacuuming works. I have only one cursor.execute('vacuum my_table') in there, between the last cursor.execute('DELETE FROM... and the commit.


    Question: when starting the import, the epg.db is written again along with its journal file - can't we do the import totally in memory and write it out at the end?



    I have to do some shopping so it will be evening before I am on-line again.



  • Trying is about gaining wisdom.


    The try:/except blocks are causing this error message, but if I start removing all these try/excepts I'll have to rewrite the whole damned thing - no chance :pinch:
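For what it's worth, a small guard in the destructor would avoid that particular message without rewriting the try/except structure. A sketch with a stand-in class, since epgdataclass's real layout is only known from the traceback:

```python
# EpgImporter stands in for the plugin's epgdataclass. If __init__ raises
# before self.epg is assigned, __del__ still runs and, without the guard,
# produces exactly the AttributeError quoted in the log above.
class EpgImporter(object):
    def __init__(self, path):
        self.epg = open(path)  # may raise before self.epg exists

    def __del__(self):
        if hasattr(self, "epg"):  # only clean up what was actually created
            self.epg.close()
```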


    And yes, now that the base code/logic works we could optimize performance.


    But before doing everything in memory just to gain speed, at the risk of corruption, we should consider Reichi's advice to wrap everything in one big begin/end transaction.


    And the multiple cursors are not really needed; they were a preparation for multi-threaded updates, which we then decided against.


  • I get the lock while the file is being written out to storage. If there is corruption in memory, the old epg.db is read in again on the next restart.


  • Well, maybe, but as long as all these try/excepts are there, this could still create an inconsistent DB, which I don't like.

  • Attached is an r24 without any intermediate commits - just a single begin/end transaction, as Reichi suggested - but the performance gain is not that great.


    The real advantage is that everything now becomes one big rollback in case an exception happens - which is likely, given the way the whole plugin is coded.


    But this would need to be tested - for example by killing enigma2 while the load is running and checking whether the old EPG data survives.
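The intended all-or-nothing behaviour can be sketched like this (illustrative table name, not the plugin's actual schema): Python's sqlite3 connection context manager commits on success and rolls back on any exception, so a load that dies halfway leaves the previous EPG intact.

```python
import sqlite3

def reload_epg(con, events):
    # "with con" wraps the delete and the bulk insert in one transaction:
    # commit on success, rollback on any exception, so the old data
    # survives a load that fails or is killed mid-way
    with con:
        con.execute("DELETE FROM T_Event")  # table name is illustrative
        con.executemany("INSERT INTO T_Event(id, title) VALUES (?, ?)", events)
```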
